COMMUNICATING PROCESS ARCHITECTURES 2006
Concurrent Systems Engineering Series
Series Editors: M.R. Jane, J. Hulskamp, P.H. Welch, D. Stiles and T.L. Kunii
Volume 64

Previously published in this series:
Volume 63, Communicating Process Architectures 2005 (WoTUG-28), J.F. Broenink, H.W. Roebbers, J.P.E. Sunter, P.H. Welch and D.C. Wood
Volume 62, Communicating Process Architectures 2004 (WoTUG-27), I.R. East, J. Martin, P.H. Welch, D. Duce and M. Green
Volume 61, Communicating Process Architectures 2003 (WoTUG-26), J.F. Broenink and G.H. Hilderink
Volume 60, Communicating Process Architectures 2002 (WoTUG-25), J.S. Pascoe, P.H. Welch, R.J. Loader and V.S. Sunderam
Volume 59, Communicating Process Architectures 2001 (WoTUG-24), A. Chalmers, M. Mirmehdi and H. Muller
Volume 58, Communicating Process Architectures 2000 (WoTUG-23), P.H. Welch and A.W.P. Bakkers
Volume 57, Architectures, Languages and Techniques for Concurrent Systems (WoTUG-22), B.M. Cook
Volumes 54–56, Computational Intelligence for Modelling, Control & Automation, M. Mohammadian
Volume 53, Advances in Computer and Information Sciences ’98, U. Güdükbay, T. Dayar, A. Gürsoy and E. Gelenbe
Volume 52, Architectures, Languages and Patterns for Parallel and Distributed Applications (WoTUG-21), P.H. Welch and A.W.P. Bakkers
Volume 51, The Network Designer’s Handbook, A.M. Jones, N.J. Davies, M.A. Firth and C.J. Wright
Volume 50, Parallel Programming and JAVA (WoTUG-20), A. Bakkers
Volume 49, Correct Models of Parallel Computing, S. Noguchi and M. Ota
Volume 48, Abstract Machine Models for Parallel and Distributed Computing, M. Kara, J.R. Davy, D. Goodeve and J. Nash
Volume 47, Parallel Processing Developments (WoTUG-19), B. O’Neill
Volume 46, Transputer Applications and Systems ’95, B.M. Cook, M.R. Jane, P. Nixon and P.H. Welch

Transputer and OCCAM Engineering Series
Volume 45, Parallel Programming and Applications, P. Fritzson and L. Finmo
Volume 44, Transputer and Occam Developments (WoTUG-18), P. Nixon
Volume 43, Parallel Computing: Technology and Practice (PCAT-94), J.P. Gray and F. Naghdy
Volume 42, Transputer Research and Applications 7 (NATUG-7), H. Arabnia

ISSN 1383-7575
Communicating Process Architectures 2006 WoTUG-29
Edited by
Peter H. Welch University of Kent, Canterbury, United Kingdom
Jon Kerridge Napier University, Edinburgh, Scotland
and
Frederick R.M. Barnes University of Kent, Canterbury, United Kingdom
Proceedings of the 29th WoTUG Technical Meeting, 17–20 September 2006, Napier University, Edinburgh, Scotland
Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
© 2006 The authors and IOS Press. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher.

ISBN 1-58603-671-8
Library of Congress Control Number: 2006932503

Publisher
IOS Press, Nieuwe Hemweg 6B, 1013 BG Amsterdam, Netherlands
fax: +31 20 687 0019
e-mail: [email protected]

Distributor in the UK and Ireland
Gazelle Books Services Ltd., White Cross Mills, Hightown, Lancaster LA1 4XS, United Kingdom
fax: +44 1524 63232
e-mail: [email protected]

Distributor in the USA and Canada
IOS Press, Inc., 4502 Rachael Manor Drive, Fairfax, VA 22032, USA
fax: +1 703 323 3668
e-mail: [email protected]

LEGAL NOTICE: The publisher is not responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Preface

Napier University in Edinburgh is very pleased to be hosting this year’s Communicating Process Architectures 2006 conference. It is perhaps appropriate that a meeting concerning simple ways of designing, implementing and reasoning about concurrent systems should be held in an institution named after the inventor of a simple, and highly concurrent, adding machine. The house in which John Napier lived forms part of the campus where the meeting is being held.

This is the 29th meeting in this conference series. The first was a single-day workshop, organised by Inmos, which took place in Bristol in 1985. With the success of the Transputer, we grew into an international conference, with proceedings formally published by IOS Press since March 1988. The fact that we are still here – and thriving – shows that the founding ideas have continuing relevance. Indeed, we believe they are still ahead of the game.

The papers this year are as varied and wide-ranging as ever, and we thank all the authors for their efforts in submitting papers and returning the camera-ready copies to a very tight time schedule. Subjects include various aspects of communicating process theory and their application to designing and building systems. One of the hottest current topics – safe and effective programming models for multicore processors (e.g. IBM’s Cell) – has a natural home in this community and is addressed. Other papers include a case study on large-scale formal development and verification, CSP mechanisms for Microsoft’s .NET framework, parallel systems on embedded and mobile devices, modern link technology (“SpaceWire”), various applications of occam-π, JCSP and JCSP.net (video processing, robotics, massive multiplayer gaming, material and biological modelling, etc.), visual design languages and tools for CSP and real-time systems, new process-oriented programming and design environments, new developments of the Transterpreter, efficient cluster computing and the debugging of message-passing systems. And, still, there is The Fringe programme!

We anticipate that you will have a very fruitful get-together and hope that it will provide you with as much inspiration and motivation as we have always experienced. We trust you will survive the many late nights this conference seems to provoke.

Finally, we thank the Programme Committee for all their diligent and hard work in reviewing the papers, and Fiona Dick and Jennifer Willies (Napier University) for making the arrangements for this meeting.

Jon Kerridge (Napier University)
Frederick Barnes (University of Kent)
Peter Welch (University of Kent)
Programme Committee

Prof. Peter Welch, University of Kent, UK (Chair)
Dr. Alastair Allen, Aberdeen University, UK
Dr. Fred Barnes, University of Kent, UK
Dr. Richard Beton, Roke Manor Research Ltd, UK
Dr. John Bjorndalen, University of Tromso, Norway
Dr. Marcel Boosten, Philips Medical Systems, The Netherlands
Dr. Jan Broenink, University of Twente, The Netherlands
Dr. Alan Chalmers, University of Bristol, UK
Prof. Peter Clayton, Rhodes University, South Africa
Dr. Barry Cook, 4Links Ltd., UK
Ms. Ruth Ivimey-Cook, Creative Business Systems Ltd, UK
Dr. Ian East, Oxford Brookes University, UK
Dr. Mark Green, Oxford Brookes University, UK
Mr. Marcel Groothuis, University of Twente, The Netherlands
Dr. Michael Goldsmith, Formal Systems (Europe) Ltd., Oxford, UK
Dr. Kees Goossens, Philips Research, The Netherlands
Dr. Gerald Hilderink, Imtech ICT Technical Systems, Eindhoven, The Netherlands
Prof. Jon Kerridge, Napier University, UK
Dr. Adrian Lawrence, Loughborough University, UK
Dr. Jeremy Martin, GSK Ltd., UK
Dr. Stephen Maudsley, Bristol, UK
Mr. Alistair McEwan, University of Surrey, UK
Prof. Brian O'Neill, Nottingham Trent University, UK
Prof. Chris Nevison, Colgate University, New York, USA
Dr. Denis Nicole, University of Southampton, UK
Prof. Patrick Nixon, University College Dublin, Ireland
Dr. Jan Pedersen, University of Nevada, Las Vegas
Dr. Roger Peel, University of Surrey, UK
Ir. Herman Roebbers, Philips TASS, The Netherlands
Prof. Nan Schaller, Rochester Institute of Technology, New York, USA
Dr. Marc Smith, Vassar College, Poughkeepsie, New York, USA
Prof. Dyke Stiles, Utah State University, USA
Dr. Johan Sunter, Philips Semiconductors, The Netherlands
Mr. Øyvind Teig, Autronica Fire and Security, Norway
Prof. Rod Tosten, Gettysburg University, USA
Dr. Stephen Turner, Nanyang Technological University, Singapore
Dr. Brian Vinter, University of Southern Denmark, Denmark
Prof. Alan Wagner, University of British Columbia, Canada
Dr. Paul Walker, 4Links Ltd., UK
Mr. David Wood, University of Kent, UK
Ir. Peter Visser, University of Twente, The Netherlands
Contents

Preface, Jon Kerridge, Frederick Barnes and Peter Welch … v
Programme Committee … vi
SpaceWire – DS-Links Reborn, Barry Cook and Paul Walker … 1
An Introduction to CSP.NET, Alex A. Lehmberg and Martin N. Olsen … 13
Performance Evaluation of JCSP Micro Edition: JCSPme, Kevin Chalmers, Jon Kerridge and Imed Romdhani … 31
Ubiquitous Access to Site Specific Services by Mobile Devices: The Process View, Jon Kerridge and Kevin Chalmers … 41
CSP for .NET Based on JCSP, Kevin Chalmers and Sarah Clayton … 59
pony – The occam-π Network Environment, Mario Schweigler and Adam T. Sampson … 77
A Study of Percolation Phenomena in Process Networks, Oliver Faust, Bernhard H.C. Sputh and Alastair R. Allen … 109
Portable CSP Based Design for Embedded Multi-Core Systems, Bernhard H.C. Sputh, Oliver Faust and Alastair R. Allen … 123
A JCSP.net Implementation of a Massively Multiplayer Online Game, Shyam Kumar and G.S. Stiles … 135
SystemCSP – Visual Notation, Bojan Orlic and Jan F. Broenink … 151
Interacting Components, Bojan Orlic and Jan F. Broenink … 179
TCP Input Threading in High Performance Distributed Systems, Hans H. Happe … 203
A Cell Transterpreter, Damian J. Dimmich, Christian L. Jacobsen and Matthew C. Jadud … 215
Mobile Robot Control: The Subsumption Architecture and occam-π, Jonathan Simpson, Christian L. Jacobsen and Matthew C. Jadud … 225
Rain: A New Concurrent Process-Oriented Programming Language, Neil Brown … 237
Rain VM: Portable Concurrency Through Managing Code, Neil Brown … 253
Native Code Generation Using the Transterpreter, Christian L. Jacobsen, Damian J. Dimmich and Matthew C. Jadud … 269
Compositions of Concurrent Processes, Mark Burgin and Marc L. Smith … 281
Software Specification Refinement and Verification Method with I-Mathic Studio, Gerald H. Hilderink … 297
Video Processing in occam-π, Carl G. Ritson, Adam T. Sampson and Frederick R.M. Barnes … 311
No Blocking on Yesterday’s Embedded CSP Implementation, Øyvind Teig … 331
A Circus Development and Verification of an Internet Packet Filter, Alistair A. McEwan … 339
Classification of Programming Errors in Parallel Message Passing System, Jan B. Pedersen … 363
Compiling CSP, Frederick R.M. Barnes … 377
A Fast Resolution of Choice Between Multiway Synchronisations (Invited Talk), Peter H. Welch … 389
Author Index … 391
SpaceWire – DS-Links Reborn Barry COOK and Paul WALKER 4Links Limited, The Mansion, Bletchley Park, MK3 6ZP, UK {Barry, Paul}@4links.co.uk
Abstract. DS-links were created to provide a low-latency, high performance data link between parallel processors. When the primary processor using them was withdrawn these links largely disappeared from view but were, in fact, still being used (albeit not for parallel computing) in the Space industry. The potential for these links, with their simple implementation, led to their adoption, in modified form, for a growing range of data communication applications. In 2003, the European Space Agency published a definition of DS-links known as SpaceWire. We briefly describe the original DS-links and detail how SpaceWire has kept or modified them to produce a now popular technology with a rapidly increasing number of implementations and wide take-up.
Introduction

Concurrent processing systems using a number of physically separate processors must exchange data in a timely fashion. Performance is dependent on data throughput and, more crucially, communication latency.

The first system-on-a-chip processor designed for multiprocessor computing, the Transputer, contained four communication links running at 10 or 20Mb/s which, at the time, was considerably faster than existing networks, such as Ethernet, could supply. That this speed, matched to the processing capability available, could be used effectively was demonstrated by a number of impressive applications, such as real-time ray-tracing graphics programs.

Development of a faster processor, the T9000, required faster communication links. These were upgraded to 100Mb/s, and facilities were provided to multiplex many logical links (virtual channels) over a single physical link. Complex networks could be built using a routing switch, the C104. These communication links were standardised as IEEE-1355 [1]. The links and networks were described in [2].

Alas, the Transputer family was not developed further and the link technology was only used in a limited number of areas for which it was particularly well suited. One notable instance was a 1000-node system at CERN, used for data capture in high-energy physics experiments – this system proved the long-term reliability of such links.

Spacecraft applications, due to the inherent difficulty of making changes after launch, require components and systems that are predictable and reliable. They also require ever more capability as missions become more adventurous. Increased data transmission throughput requirements led to consideration of new network implementations. The link technology of the T9000 was considered a good place to start and, with relatively minor changes, it has become a new standard for space applications [3].
IEEE-1355 was made available in a radiation-hardened version and used for point-to-point links in the successful missions Rosetta [4], Mars Express [5] and Venus Express [6]. Early versions of SpaceWire are flying on SWIFT [7] and several others (classified for commercial or other reasons). SpaceWire is planned for use on a wide variety of different missions throughout the world. The European Space Agency plans to use SpaceWire for most, if not all, of its future missions. A number of national missions, such as Taiwan’s Argos Satellite, are using SpaceWire. Key US missions are the James Webb Space Telescope (formerly known as “Hubble 2”) [8], the Lunar Reconnaissance Orbiter [9] and GOES-R [10].

Figure 1. Data-Strobe encoding (Data and Strobe waveforms for the example bit sequence 0 1 1 1 0 1 0 0 0 1)

One contrast with competing technology such as Ethernet is in the difficulty of implementation. Fast Ethernet, and Gigabit Ethernet even more so, requires analogue processing of the received signals to extract data – a silicon-hungry process. SpaceWire achieves similar data rates with a minimal silicon requirement. At least one team has abandoned attempts to implement Ethernet in a form suitable for Space applications.

1. DS-links – as Originally Defined
DS (Data-Strobe) links provide a high performance, low latency, point-to-point communication mechanism. Data is sent as self-contained packets and a higher level protocol provides message transfers.

1.1 The Physical Layer

Data encoding using a pair of lines provides an important facility: high-speed data can be transmitted whilst tolerating relatively broad skew margins. Data is sent, unaltered, on a signal line known, unsurprisingly, as "Data"; the second line, known as "Strobe", carries a signal that can be used to re-create a clock at the receiver. The strobe signal is generated very simply: if, at any bit period boundary, the data signal does not change, then the strobe signal does – see Figure 1. The data clock signal is re-created simply by an exclusive-or of the data and strobe signals. This clock can be used to latch the data, noting that both the rising and falling edges must be used, and to drive the receiver circuits.

This extremely simple transfer of the clock signal, without the need for phase-locked loops, gives an "auto-baud" facility. Speed variation can be used to conserve power by running the link slowly when high speed is not required, as when idling and sending only null tokens, with an instant return to full-speed operation.

T9000 links run at 100Mb/s and a differential signal is used (IEEE-1355: DS-DE) to give transmission over a few metres of twisted-pair cable. New connectors were designed for DS links in a format giving a high density, so that a large number of connectors can be placed on standard circuit boards. The IEEE-1355 encoding scheme has been adopted for the IEEE-1394 standard [11] and the Apple Computer version of IEEE-1394 known as FireWire.
1.2 Character Level

The sequence of bits transmitted from device to device is logically broken into a series of tokens. Each token contains a parity bit and a bit indicating that this is either a control token, with a total length of 4 bits, or a data token, with a total length of 10 bits – see Figure 2. The parity bit covers the "data" bits preceding it, and the control bit following, to give security against errors in transmission. Figure 3 illustrates a data stream, showing the data line and the bits covered by each parity bit.

Figure 2. Tokens (bits shown in transmission order, earlier to later; P is the parity bit):

  P 1 0 0                FCT – Flow Control Token
  P 1 0 1                EOP – End Of Packet
  P 1 1 0                EOM – End Of Message
  P 1 1 1                ESC – Escape: next control token interpreted as …
    0 1 0 0              NUL – Null token
    0 1 0 1              Undefined
    0 1 1 0              Undefined
    0 1 1 1              Undefined
  P 0 a b c d e f g h    Data (a is LSB, h is MSB)
Figure 3. Example stream of bits (transmission order, earlier to later) – the tokens Null, Data 0xA5, EOP, Null:

  P 1 1 1 0 1 0 0 1 0 1 0 1 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0

Braces in the original figure mark the range of bits covered by each parity bit.
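The parity rule above can be sketched as follows (illustrative only: it models tokens as Python tuples, and it assumes odd parity, a polarity the text does not state):

```python
# Illustrative serialisation of DS-link tokens.  Each token starts with a
# parity bit P and a control flag C; P covers the payload bits of the
# *previous* token plus the C flag that follows P, so a single-bit error
# anywhere is detected at the next token boundary.  Odd parity is assumed.

def serialise(tokens):
    """tokens: list of ('ctl', [b1, b2]) or ('data', byte). Returns a bit list."""
    out, covered = [], []            # 'covered' = payload bits since last parity
    for kind, payload in tokens:
        c = 1 if kind == 'ctl' else 0
        p = (sum(covered) + c + 1) % 2   # make P + covered + C odd
        out += [p, c]
        if kind == 'ctl':
            bits = payload               # two bits select FCT/EOP/EOM/ESC
        else:
            bits = [(payload >> i) & 1 for i in range(8)]  # LSB first
        out += bits
        covered = bits
    return out

# an FCT (control code 0 0) followed by the data byte 0xA5
stream = serialise([('ctl', [0, 0]), ('data', 0xA5)])
assert len(stream) == 4 + 10         # 4-bit control token + 10-bit data token
```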
There is no provision for re-synchronisation to the continuing data stream: the above interpretation relies on the link starting up from idle and staying in step thereafter. The inserted parity bits allow detection of errors that may cause a loss of synchronisation, and thus result in the link stopping and re-starting.

Bits are continuously sent over the link – when there is no useful data to transfer, null tokens are sent. In order to ensure that no data is lost, no tokens that need to be stored at the receiver (data, EOM or EOP tokens) are sent until the receiver indicates it has space for them. The receiver sends a flow control token (FCT) for each 8 bytes it can accept; several may be sent to indicate a large buffer, and they may overlap data transfers to allow uninterrupted data transmission. This low-level flow control happens frequently and is best hidden, by suitable hardware link controllers, from message handling software or hardware.
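A minimal model of this credit mechanism, with hypothetical class and method names, might look like:

```python
# Illustrative model of DS-link credit-based flow control: the receiver
# issues one FCT per 8 bytes of free buffer space; the sender may only
# transmit tokens that occupy receiver buffer space (data, EOP, EOM)
# while it holds credit.  Null tokens need no credit.

class CreditedSender:
    """Tracks transmit credit granted by received FCTs (hypothetical name)."""

    def __init__(self):
        self.credit = 0                 # bytes the receiver can currently accept

    def receive_fct(self):
        """Each FCT grants credit for 8 more bytes; several may be
        outstanding at once to advertise a larger buffer."""
        self.credit += 8

    def try_send_byte(self):
        """Send one data byte if credit allows; otherwise the link idles."""
        if self.credit == 0:
            return False                # must wait for the next FCT
        self.credit -= 1
        return True

sender = CreditedSender()
sender.receive_fct()                    # receiver advertises 8 bytes of buffer
sender.receive_fct()                    # ... and 8 more
sent = sum(sender.try_send_byte() for _ in range(20))
assert sent == 16                       # only 16 bytes of credit were granted
```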
1.3 High Level Protocol

Efficient transfer of data requires low overheads, and sending each message as a single, possibly large, transaction appears to be best. There are two reasons why this may not be the best strategy for a system:

- A message that becomes blocked part-way through transfer, due to full buffers, may prevent the transfer of other, unrelated messages. This situation is likely to be a problem in networks with wormhole routers, such as T9000/C104 systems.
- The receiver may need to store, temporarily, incoming messages if they arrive before the receiving process has allocated space for them.
Figure 4. Packet formats (fields in transmission order, earlier to later):

(a) Long message (greater than 32 bytes):
      first packet:   Header | 32 data bytes | EOP
      ...
      last packet:    Header | 1 to 32 data bytes | EOM

(b) Short message (less than or equal to 32 bytes):
      Header | 0 to 32 data bytes | EOM

(c) Acknowledge packet:
      Header | EOP
Instead, the T9000 breaks messages into packets with a known maximum size (32 bytes), so that receiver buffers can be allocated and network blocking is limited – see Figure 4. A short (0 to 32-byte) message is sent with an end-of-message (EOM) token. Longer messages (more than 32 bytes) are broken down: one or more 32-byte packets terminated with an end-of-packet (EOP) token are sent before a final 32-byte, or smaller, packet with EOM. Each packet has to be acknowledged before the next is sent, so that the receiver has to buffer, at most, one packet of 32 bytes. Information to guide the routing of packets around the network is added in the form of a header at the front of the packet.

Since packets terminated with EOP are as large as possible, it is possible to use a shorter-than-expected packet with EOP as an acknowledge. Usually this is a zero-length packet, as shown in Figure 4. The flow of data and acknowledge packets is illustrated in Figure 5 for a message that has been split into more than three packets.
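The packetisation rule can be sketched as follows (illustrative; the single header byte is a hypothetical stand-in for real routing headers, and the function name is invented):

```python
# Illustrative T9000-style message packetisation: messages are split into
# packets of at most 32 data bytes; every packet but the last is terminated
# with EOP, the last with EOM.  A routing header is prepended to each packet.

EOP, EOM = "EOP", "EOM"

def packetise(message, header, max_size=32):
    """Split a message (bytes) into (header, chunk, terminator) packets."""
    if len(message) <= max_size:             # short message: single EOM packet
        return [(header, message, EOM)]
    packets = []
    for i in range(0, len(message), max_size):
        chunk = message[i:i + max_size]
        packets.append((header, chunk, EOP))
    packets[-1] = (header, packets[-1][1], EOM)   # final packet ends with EOM
    return packets

msg = bytes(range(70))                       # 70-byte message -> 32 + 32 + 6
pkts = packetise(msg, header=0x05)           # 0x05 is a made-up header byte
assert [p[2] for p in pkts] == [EOP, EOP, EOM]
assert [len(p[1]) for p in pkts] == [32, 32, 6]
```

Note that a receiver implementing this scheme never needs more than one 32-byte packet of buffer space per message, which is exactly what the per-packet acknowledge guarantees.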
Figure 5. Flow of packets and acknowledges (time runs downward):

  SENDER                        RECEIVER
  packet A (EOP)  ---------->   the receiver waits until an input instruction
                                is issued, then replies with an acknowledge
                  <----------   ACK
  packet B (EOP)  ---------->   further packets are simply acknowledged
                  <----------   ACK
  ...
  packet F (EOM)  ---------->   the final packet has an EOM marker
                  <----------   ACK
1.4 Scheduler Interaction

Apart from automatically breaking messages into packets, and re-assembling them, the most important feature of the link engine is its close integration with the process scheduler. Processes waiting on the transfer of a message are automatically re-inserted into the process run queue as soon as the message has been transferred. This hardware scheduling is very much faster than systems using interrupts and software to re-schedule processes. The reduced latency of message transfer is directly reflected in greater performance.
1.5 Routing Switches

Networks of processors are connected via one or more routing switches – the C104 [12][13]. This switch has 32 ports and a very high performance implementation. Wormhole routing is used to minimise latency: packets are forwarded toward their destinations as soon as the route is clear. Routing is based on a packet header: the first byte(s) of a packet are used to access a table in each switch, and the values contained in the tables select the appropriate output port. Output port grouping increases throughput and/or provides redundancy [13]. Configuration of routing switches is through two dedicated links, which allow a set of routers to be daisy-chained together.

2. SpaceWire
The relatively small silicon area needed to implement a DS-link is particularly attractive to users concerned to minimise volume and weight – such as the Space industry. Some aspects of IEEE-1355 are not suited to the Space environment, and others are not familiar enough to be accepted as-is. As a result, a variant link has been specified that is very similar to DS-links but contains some significant differences. The result is known as SpaceWire, or SpW, and is specified in a European space industry document [3]. This section, using section headings corresponding to those of the last section, describes the differences between SpaceWire and IEEE-1355.

2.1 The Physical Layer

Data-strobe encoding has been retained exactly as described above. Where IEEE-1355 used PECL logic for the serial interface, SpaceWire uses LVDS – at much reduced power consumption. Advances in technology allow speeds of 200Mb/s to be easily obtained; test equipment such as that from 4Links [14] is specified to 400Mb/s (and actually operates to 500Mb/s), and at least one design is running at 625Mb/s.

The practical limit to speed turns out to be significantly affected by the connectors chosen for SpaceWire. The connectors designed for IEEE-1355 are not considered suitable for Space applications, where a more robust construction is required. Cables suitable for the wide temperature range in space, and the vacuum environment, are much thicker than needed for terrestrial applications. SpaceWire chose to use micro-miniature D type connectors with 9 pins – similar in shape and style to the serial interface connectors found on a PC, but smaller (and much more expensive!). These connectors do not attempt to provide a controlled electrical environment for differential signalling at high speed. As a result, higher speed links are increasingly limited by the performance of the connectors, and this is a significant issue as speeds rise above 500Mb/s.
A better connector has been designed but has not yet been accepted as an official alternative.

2.2 Character Level

The same coding is retained as for IEEE-1355 (Figure 3) but modified in the case of end-of-packet, and extended to provide an additional feature – time codes.

2.2.1 End-of-Packet

IEEE-1355 defined the two end-of-packet markers as EOP-1 and EOP-2, noting that EOP-2 may be used as an alternative end-of-packet marker, or as an indication of error. The T9000
uses EOP-2 as an alternative end marker, providing both end-of-packet and end-of-message tokens – although it was also useful for error conditions in a network [2]. SpaceWire restricts EOP-2 to an indication of an error condition and renames it EEP – error-end-of-packet. Virtual channels, if required, cannot easily be implemented as they were in the T9000.

2.2.2 Time Codes
Several code combinations – those starting with an ESC token – were undefined in IEEE-1355 (although one or two were tentatively reserved for improving link performance but never, so far as we know, appeared in a commercial chip).

Figure 6. Time-code is ESC-Data:

  P 1 1 1   1 0 t7 t6 t5 t4 t3 t2 t1 t0
SpaceWire defines an ESC token followed by a data token as a “time code” – Figure 6. This is intended to provide a broadcast mechanism to distribute time ticks throughout a system. The data byte contains a value that is incremented each time a time-code is sent from the (single) time-code master. If the received value corresponds to the last received value plus one (modulo the size of the counter – 6 bits of the 8 available) then a correct code has been received and action can be taken. If it is not as expected then no action is taken. In all cases the local value is updated with the received value, ready for the next time-code to arrive.

Each routing switch in a network receives time codes and, if they contain the expected value, broadcasts them to neighbouring switches and end-nodes. Time-codes are thus spread from switch to switch/end-node throughout the whole system. Redundant networks are not trees but graphs – they contain alternate routes, and bidirectional links on those routes cause loops. Loops are normally disastrous for broadcast traffic and thus strictly forbidden in Ethernet networks (and disabled in IP networks by reducing the network to a tree with the Spanning Tree algorithm [15]). Redundancy is a fundamental requirement in a reliable network, and hence broadcast is not supported by SpaceWire – except for time codes. A time code received by a node that has previously seen it will be identified, because it contains the last-seen value, as not-for-action and discarded, rather than causing an infinite sequence of transmissions.

Received time codes suffer delays which depend on their route through a network, and jitter that depends on the link implementation – times of the order of tens of microseconds are typical in a small network. It has been shown that delay can be compensated, and jitter reduced to the order of nanoseconds, by careful design of link components [16].
End-node silicon produces time ticks and, optionally, time values when a time code is received. It is envisaged that, for more stable timing, a local oscillator would be phase-locked to the incoming tick. The time value has only 6 bits, giving only limited information. A separate message giving coarse time information, with loose delivery requirements, is used in association with time-codes to give a complete time. For example, messages giving the date and time of the next minute are sent normally, and time ticks once per second indicate precisely when the next minute starts; the combination gives a complete, precise, date and time.
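The time-code acceptance rule lends itself to a short sketch (illustrative; the class and method names are invented):

```python
# Illustrative SpaceWire time-code handling: a node or router acts on a
# time-code only if its 6-bit value is the last-seen value plus one
# (modulo 64); in all cases the local value is updated.  Duplicates that
# arrive over redundant paths carry the last-seen value and are ignored,
# so loops in the network cannot cause a broadcast storm.

class TimeCodeHandler:
    def __init__(self):
        self.last = 0                    # last received time value

    def receive(self, value):
        """Return True if the tick should be acted on (and re-broadcast)."""
        value &= 0x3F                    # only 6 of the 8 bits carry time
        expected = (self.last + 1) % 64
        act = (value == expected)
        self.last = value                # always resynchronise to the master
        return act

h = TimeCodeHandler()
assert h.receive(1) is True              # 0 -> 1: expected, act and forward
assert h.receive(1) is False             # duplicate via a redundant path: ignore
assert h.receive(2) is True              # next tick from the master
```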
2.3 High Level Protocols

Space applications are varied and, although there is strong pressure to use a minimal set of protocols (in the hope of increasing inter-operability and reducing costs), it is acknowledged that more than one protocol is required. A SpaceWire network is able to carry more than one protocol at the same time, and a protocol for protocols has been specified – each packet must identify the protocol it uses. Each packet, as it arrives at its destination, starts with a protocol identifier (PID), one or two bytes, which indicates how that packet should be processed [16]. One incidental benefit is that a mis-routed packet sent to a node that does not know how it should be handled can be immediately rejected (mis-routing is just one of the many abnormal behaviours possible in the hostile Space environment).

In contrast to Transputer thinking, in which the packet receiver has complete control, the first new protocol defined – and expected to be widely used – is the Remote Memory Access Protocol (RMAP). Some ingenuity in the use of SpaceWire protocols, however, would allow close integration with a scheduler and even virtual channels. For example, RMAP is defined as a memory transfer mechanism … but its parameters could be re-interpreted. There is no reason why an RMAP “memory address” could not be treated as a virtual channel number!

2.3.1 Remote Memory Access Protocol (RMAP)
Permitting data transfers to or from memory on a remote node, RMAP [17] is seen as underpinning many higher level protocols. A packet contains the memory address at which the data is to be placed or retrieved, the number of bytes to transfer, how the transfer is to take place, what response, if any, to give, a transaction identifier, data (if a write) and limited error detection. Transfers can take place as soon as data arrives, or can be delayed, buffering the data, until it has been checked for errors. Acknowledgements can be sent indicating the status of the transfer, or the transaction can be silent.

There is not space here to describe this protocol in detail, and it is not, in any case, completely finalised nor published at the time of writing. It is, however, worth mentioning two aspects of the protocol – overhead and performance. Although compact for its function, memory access necessarily requires several bytes of overhead for memory address, data length, function etc., and this seems massive compared with the T9000 virtual channel format. Although not specified as a requirement, the protocol is simple enough to allow its implementation in hardware with DMA transfers. This allows the data transfer to be completely offloaded from the CPU until an interrupt signals the end of the transfer. Several transfers can be in progress at the same time, providing, in effect, capabilities similar to the T9000 Virtual Channel Processor (VCP), although there is a fundamental difference in security – the VCP closely controlled access at the receiving end, whilst RMAP is given relatively free rein to read/write any data it wishes. As mentioned above, we can see ways to reconcile the differences and use RMAP in a more secure manner, but these have not been discussed within the SpaceWire community.

2.4 Scheduler Interaction

Nothing in the SpaceWire standard is aimed at supporting close integration with a scheduler. Traditional interrupt mechanisms are assumed.
Although this can supply multiprocessor performance in the way that Ethernet does, this approach fails to achieve the superb performance demonstrated by the Transputer with its integrated communications and scheduler.
2.5 Routing Switches

SpaceWire routing switches are designed to be connected as networks, typically with more than one physical path between nodes to provide redundancy and enable fault tolerance. The method of directing a packet through the network is the same in principle, but differs in detail, from that used with C104 routing switches. Each packet contains a sequence of bytes which, although indistinguishable in form, are interpreted according to location. Each routing switch interprets the first data byte as an indicator of how to route the packet – it is often called a "header" byte.

The header byte may be interpreted as a physical or logical address. Physical addresses directly indicate which port of the switch is used to output the packet, and that byte is deleted from the packet ("header deletion"). Valid physical addresses are 0 to 31, where ports are numbered from 1 to 31 – port 0 is used to direct the packet to the control port of the switch itself. Router configuration uses the data network – the mass (as in ‘weight’) of a separate configuration network being unacceptable.

Logical addresses are used to address an internal routing table to determine which port(s) should be used to output the packet. It is possible to specify a group of ports as equivalent ("grouped adaptive routing") to provide redundancy and/or increased bandwidth. Broadcast is NOT supported – indeed, it is forbidden; a network containing multiple paths contains loops, and broadcast would induce deadlock. In this mode the header byte may be deleted or left in place – a single logical address can be used for all hops in the network from source to destination, provided all routing tables are set appropriately.

3. Components
Although rather hidden from wide public attention, by reason of being within a specialised application area – the Space industry – there are now many devices and designs for the SpaceWire variant of DS-links. There are chips implementing SpaceWire links with simple streamed data interfaces, routing switches, and integrated CPU-with-SpaceWire devices – although the CPUs are not high performance by today's standards, they are optimised for radiation tolerance. Link, router and CPU designs are also available as Intellectual Property (IP) for a wide, and widening, range of implementations from Field Programmable Gate Arrays (FPGA) to full-custom Application Specific Integrated Circuits (ASIC). Several suppliers are active and generating new designs with a variety of interfaces.

3.1 Link Chips, IP and Interfaces

Atmel have produced radiation-hardened IEEE-1355 chips, one having a single link, the SMCS116 (also known as T7906E), and one having three links, the SMCS332 (also known as TDSS901E). A modified version of the latter will shortly become available in a version that (nearly) implements SpaceWire – the SMCS332SpW. On the surface, a SpaceWire link design appears simple and many groups have started designing their own. There are, however, some subtle aspects that can catch the unwary – and fool test-by-simulation approaches to validation. The highest performance design we are aware of reaches 625 Mb/s. 4Links has produced six generations of SpaceWire design for various devices and in various styles – it is surprising how many different ways there are to implement the specification. We have found some areas of the design to be more prone to error than others, and the design tools available fail to give the necessary support. Some design style / tool combinations lead to less reliable results than others. Our focus is on test equipment and the designs we now use have proved very reliable – using a design style that is well supported by design tools – although not the smallest possible implementation. We are considering releasing some of the lower-performing versions of link and support functions in netlist form. Due to the variety of possible designs it is difficult to give a definitive size estimate – further complicated by the ability to choose buffer sizes and optional features such as timecodes. Very roughly, the smallest designs are in the region of one to five thousand FPGA gates (use your favourite conversion factor to get ASIC gates). One design from 4Links, including time codes, a 16-byte receive buffer (SpaceWire allows 8 to 56 bytes), capable of link speeds exceeding 50 Mb/s and implemented on a Xilinx™ Virtex 2 Pro™ FPGA, requires 173 flip-flops and 371 (4-input) LUTs – "6000 gates" – just 2.5% of a V2P20. Power consumption is equally difficult to state in the general case, as it varies with features, clock rates / link speeds and process technology.

3.2 Router Chips, IP and Boxes

By C104 standards, the routing switches available and forthcoming are modest – 8 ports compared with the C104's 32. ESA is funding a routing switch design and first silicon is due by mid-2007. This will have a local port as well as 8 SpaceWire ports. 4Links offers IP for a simple routing switch in Xilinx™ FPGAs. They also offer a flexible routing switch in their standard equipment range with considerably enhanced capabilities. Each unit can be configured as one or more routers, and/or a router may span more than one unit. Large routing switches can thus be built to match, or even exceed, the C104 capability.

3.3 Access to SpaceWire

4Links commercial SpaceWire interfaces use TCP/IP data streams, which are supported by virtually all available operating systems – drivers being supplied as part of the operating system.
Experience has shown that this approach guarantees both portability and performance – it is easily possible to achieve throughputs above 95% of that theoretically available on 100 Mb/s Ethernet. Gigabit Ethernet is used to support the faster SpaceWire interfaces, and performance figures here are equally encouraging. Users are provided with access to the low-level TCP data stream and simple APIs, for which the full source code is supplied. APIs are currently available in C and Java. A simple example of the C interface is shown in Listing 1. It has been demonstrated that it is possible to integrate network-based communication as channels in languages such as occam [SPoC / KRoC].

4. Conclusion
Far from being a technology of the past, DS-links – in the form of SpaceWire – are being actively developed. As with Transputer links, take-up followed standardisation. There is an understandable reluctance to adopt an unknown technology, but the decision is easier when there is an approved standard. The process is slow and relies on long-term commitment from advocates who not only tell the world there are benefits but demonstrate them. 4Links have a demonstration fault-tolerant network that has been a significant factor in convincing engineers and, perhaps more importantly, managers that they would benefit from adopting SpaceWire.
#include "EtherSpaceLink.h"

char buffer[1024];
int n, EOP;
...
// Open a connection to the interface
EtherSpaceLink link = EtherSpaceLink_open( "192.168.0.24" );
// Set the SpaceWire link speed
EtherSpaceLink_set_speed( link, 200 );
// Send a 40-byte packet
EtherSpaceLink_write_packet( link, buffer, 40, EtherSpaceLink_EOP );
// Receive a packet
n = EtherSpaceLink_read_packet( link, buffer, 1024, &EOP );
// Close connection
EtherSpaceLink_close( link );
Listing 1. Example use of the C user interface
Point-to-point data communication using IEEE-1355 enabled users in the Space industry to gain experience with the technology and see its advantages. Initial applications for SpaceWire are also simple point-to-point connections. The use of networks to provide redundancy and fault tolerance is still developing. To date, communication has been between a controlling processor and remote data sources and sinks. Future plans for the remote data controller – known as a Remote Terminal Controller (RTC) – include a processor. Thus we see a trend toward multiprocessor systems, and issues of reliability will generate interest in CSP [19] and related developments. There is active development of integrated processors and links, but no indication that a Transputer-like design is planned. Parallel processing is an emerging interest – a single-chip four-processor design [20] is well advanced – primarily to supply future processing demands. Many of the research results previously developed will have a new application in the Space industry and safety-critical systems in general. Many ideas concerning reliable computing that were not taken seriously are important to the Space industry and, in turn, to other industries due to their Space pedigree. Industrial DS-links have spun in to Space, been reworked as SpaceWire, and are set to spin out again to industries that weren't ready for them when originally offered. Associated theories of guaranteed behaviour by networks and software may very well follow.

References

[1] IEEE 1355-1995: Standard for Heterogeneous InterConnect (HIC) (Low Cost Low Latency Scalable Serial Interconnect for Parallel System Construction), IEEE Standards Department, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331, USA
[2] Barry M. Cook, Data-Strobe Links and Virtual Channel Processors, in A. Bakkers (ed.), Parallel Programming and Java, Proceedings of WoTUG-20, IOS Press, 1997, ISBN 90 5199 336 6
[3] European Cooperation on Space Standardization, ECSS-E-50-12A SpaceWire – Links, nodes, routers and networks (24 January 2003)
[4] Rosetta Mission Home Page, European Space Agency, http://sci.esa.int/rosetta/, last visited: August 2006
[5] Mars Express Home Page, European Space Agency, http://mars.esa.int/, last visited: August 2006
[6] Venus Express Home Page, European Space Agency, http://sci.esa.int/venusexpress/, last visited: August 2006
[7] Swift Gamma-Ray Burst Mission, NASA Goddard Space Flight Center, http://swift.gsfc.nasa.gov/docs/swift/swiftsc.html, last visited: August 2006
[8] James Webb Space Telescope, NASA, http://www.jwst.nasa.gov/, last visited: August 2006
[9] Lunar Reconnaissance Orbiter, NASA Goddard Space Flight Center, http://lunar.gsfc.nasa.gov/missions/, last visited: August 2006
[10] Geostationary Operational Environmental Satellite (GOES) R series, NASA, http://science.hq.nasa.gov/missions/satellite_67.htm, last visited: August 2006
[11] IEEE-1394-1996 High Speed Serial Bus, IEEE Standards Department, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331, USA
[12] ST C104 Asynchronous Packet Switch, data sheet, SGS-Thomson Microelectronics
[13] M.D. May, P.W. Thompson and P.H. Welch (eds.), Networks, Routers and Transputers, IOS Press, ISBN 90 5199 129 0
[14] 4Links Home Page, http://www.4links.co.uk, last visited: August 2006
[15] IEEE 802.1D Standard for Local and Metropolitan Area Networks – Media Access Control (MAC) Bridges, IEEE Standards Department, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331, USA
[16] Barry Cook, Reducing Time Code Jitter, International SpaceWire Seminar, ESTEC, November 2003 (available at www.4links.co.uk/reducing-time-code-jitter.pdf)
[17] RMAP – Remote Memory Access Protocol, working document of the SpaceWire working group
[18] PID – Process Identifiers, working document of the SpaceWire working group
[19] C.A.R. Hoare, Communicating Sequential Processes, Prentice-Hall, London, UK, 1985
[20] André L.R. Pouponnot, A Giga Instruction Architecture (GINA) for the future ESA microprocessor based on the LEON3 IP core, European Space Agency, ESTEC, Noordwijk, The Netherlands
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
An Introduction to CSP.NET

Alex A. LEHMBERG and Martin N. OLSEN
Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100 Copenhagen, Denmark
{alex, nebelong}@diku.dk

Abstract. This paper reports on CSP.NET, developed over the last three months at the University of Copenhagen. CSP.NET is an object-oriented CSP library designed to ease concurrent and distributed programming in Microsoft .NET 2.0. The library supports both shared-memory multiprocessor systems and distributed-memory multicomputers, and aims to make the architecture transparent to the programmer. CSP.NET exploits the power of .NET Remoting to provide the distributed capabilities and, like JCSP, relies exclusively on operating system threads. A Name Server and a workerpool are included in the library, both implemented as Windows Services. This paper presents CSP.NET from a user's perspective and provides a tutorial along with some implementation details and performance tests.

Keywords. CSP library, Microsoft .NET, CSP.NET
Introduction

In as little as one year from now it will be very hard to get hold of computers with only one core. In order to increase performance significantly in future generation microprocessors, all major manufacturers are taking the road of multiple cores on a chip, and multiple chips in a machine. These multi-chip multi-core machines will probably not run at much higher clock speeds than current machines, meaning that programs will have to use multiple threads of execution for any significant performance improvements to materialise. As any skilled programmer will testify, writing large error-free concurrent programs in any mainstream programming language is extremely difficult at best. Threads, and various locks and synchronisation mechanisms, are the common constructs used to achieve concurrency, but they are all low-level constructs unable to express the complex interactions in concurrent programs in a simple and secure manner. Communicating Sequential Processes (CSP) [5] as a programming model builds on the CSP algebra and provides a series of higher-level constructs that solve many of the problems inherent in traditional thread programming. CSP makes it easy to distinguish between deterministic and nondeterministic parts of concurrent programs, and makes synchronisation and concurrent execution relatively simple. occam [9] is a language inspired by CSP, but CSP-like libraries for more widespread languages also exist. They include JCSP [4], JCSP.NET [6], CTJ [7], C++CSP [3] and C++CSP Networked [2]. This paper describes a new CSP library developed as a graduate student project at the University of Copenhagen under the supervision of Brian Vinter. The background was a graduate course in practical CSP programming taught during the spring of 2006. CSP.NET is written for the Microsoft .NET platform, which in principle makes CSP available to programmers using any CLS-compliant language.
CSP.NET is designed to run equally well on shared memory multiprocessor systems and distributed memory architectures. It is designed to abstract away the underlying system and provides distributed versions of most of the implemented CSP operators. It is based on version 2.0 of the .NET framework and thus supports generics throughout. Remoting - the .NET equivalent of Java RMI - is used to provide the distributed capabilities of CSP.NET. This paper describes CSP.NET from a user's perspective and provides some insight into the implementation. Section 1 is a brief overview of the different elements in CSP.NET. Section 2 is a tutorial that demonstrates the use of CSP.NET through a series of examples. Section 3 provides some technical insight into the implementation of CSP.NET, and finally section 4 demonstrates a CSP.NET implementation of the Monte Carlo Pi algorithm along with some interesting performance figures. Readers are assumed to have knowledge of programming and CSP. CSP.NET can be downloaded from www.cspdotnet.com and we must stress that currently not all features have been thoroughly tested. Thus the state of the program can best be described as a work in progress - we did say that thread programming was hard - and feedback is most welcome.

1. Library Details

CSP.NET is an object-oriented implementation of CSP, designed to simplify concurrent and parallel programming on a Microsoft .NET 2.0 platform. The API of CSP.NET is inspired by JCSP.NET, but the implementation is completely original. CSP.NET offers constructs like barrier, bucket and parallel but, in contrast to JCSP.NET, provides both local and distributed implementations of these constructs. Furthermore, the distributed channel ends and timers of CSP.NET differ from their counterparts in JCSP.NET. Every method in CSP.NET has been separately tested and the authors have used the library in several minor applications. The entire documentation of CSP.NET is available at www.cspdotnet.com and this section gives a brief introduction to the main constructs in the library.

1.1. Processes

The object-oriented approach implies that every CSP construct is implemented as a class in CSP.NET, and hence a new process is constructed by creating a class that implements the ICSProcess interface. Processes may be executed in parallel by using an instance of the Parallel class. Since all processes in a given instance of Parallel are executed locally, true parallelism will only occur on a multiprocessor machine; otherwise the processes will be interleaved. To ease distributed programming, CSP.NET provides a distributed Parallel class similar to the standard Parallel class, but DistParallel seeks to execute processes on remote machines by utilising the CSP.NET workerpools, see section 3.5.

1.2. Channels

So far we have discussed how to define and run several different processes, each containing sequential code, but as the name CSP implies, processes must be able to interact. Interaction, or process communication, is managed by channels, which makes them a central part of any CSP implementation. CSP.NET provides four distinct channels - One2One, Any2One, One2Any and Any2Any. These are all well-known rendezvous channels that may be extended with buffers. The library comes with two predefined buffers, and additional buffers can be defined by implementing the IBuffer interface. The available buffers are the standard FIFO buffer and an infinite buffer which, in theory, is able to hold an infinite number of elements.
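The FIFO discipline that such a buffer adds to a channel can be illustrated with a minimal bounded ring buffer. CSP.NET's buffers are C# classes implementing its IBuffer interface; everything below (names, capacity, the failure-code convention) is our own C illustration of the idea, not library code:

```c
#include <stddef.h>

#define BUF_CAP 8

/* A bounded FIFO ring buffer: writes succeed until the buffer is
   full, and reads return items in the order they were written. */
struct fifo {
    int items[BUF_CAP];
    size_t head;    /* index of the oldest item */
    size_t count;   /* number of items currently buffered */
};

static int fifo_put(struct fifo *f, int v)   /* 0 on success, -1 if full */
{
    if (f->count == BUF_CAP) return -1;
    f->items[(f->head + f->count) % BUF_CAP] = v;
    f->count++;
    return 0;
}

static int fifo_get(struct fifo *f, int *v)  /* 0 on success, -1 if empty */
{
    if (f->count == 0) return -1;
    *v = f->items[f->head];
    f->head = (f->head + 1) % BUF_CAP;
    f->count--;
    return 0;
}
```

Whether a full buffer blocks the writer or refuses the write is a policy choice for the channel implementation; this sketch simply reports failure.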
1.2.1. Anonymous and Named Channels

CSP.NET channels are either anonymous or named. Named channels are distributed by default but can be declared as local, while anonymous channels are always local. Distributed channel ends allow communication between processes residing on different machines or in different application domains on the same machine. Local channels only allow communication between processes in the same application domain. Distributed channel ends may be used for local communication, but local channels are preferable due to their more efficient implementation, see section 3.2. We have chosen not to implement a named Any2Any channel, but an anonymous Any2Any channel is available - see section 3.1.

1.2.2. Name Server

To use distributed channel ends a Name Server must be available on the network. The Name Server distinguishes channels by name, so every named channel must have a unique name. Violations of this rule are not necessarily recognised by the Name Server but may result in erroneous programs. The Name Server is provided as both a standard console application and as a Windows Service. The console version is configured through the command line while the service uses an XML configuration file.

1.2.3. Channel Communication

Channels in CSP.NET are generic and may carry any serializable data type, making it possible to send almost anything through a channel. But caution must be exercised: CSP.NET channels do not necessarily copy the data as the CSP paradigm demands. Distributed channels are call-by-value while local channels are call-by-reference. This is a trade-off between safety and efficiency and, if preferred, copies can be made before sending data through a local channel.

1.3. Alternative

Alternatives permit the programmer to choose between multiple events - known in CSP.NET as guards. Four types of guards are available in CSP.NET - One2One channel, Any2One channel, CSTimer and Skip. Skips are always ready, timers are ready whenever a timeout occurs, and the channels are ready if they contain data. The channels may be distributed channels residing on remote machines, while the timers and skips must be local. To choose between ready guards, Alternative provides two methods - PriSelect and FairSelect. The former always selects the guard with the highest priority while the latter guarantees a fair selection, meaning that every ready guard will be selected within n calls to FairSelect, where n is the number of ready guards. Like JCSP, C++CSP and KRoC, FairSelect in CSP.NET delivers unit time for each choice, regardless of the number of guards, provided that at least one guard is always pending. In CSP.NET the same applies to PriSelect.

1.4. Barriers and Buckets

CSP.NET provides barriers and buckets to synchronise multiple processes. Any process synchronising on a barrier will be blocked until all processes enrolled on the barrier have called the Sync-method, and any process falling into a bucket will be blocked until another process calls the bucket's Flush-method. Barriers and buckets are either anonymous or named and, just like channels, anonymous barriers and buckets are local while named barriers and buckets can be local or distributed. Distributed barriers and buckets use the Name Server, meaning that every named barrier and bucket must have a unique name.
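The fairness guarantee of FairSelect (section 1.3) comes from a round-robin scan: each selection starts just after the previously chosen guard, so with n guards no ready guard waits more than n selections. CSP.NET itself is C#; the sketch below is our own C illustration with invented names, and it assumes readiness is already known rather than blocking until a guard becomes ready as the real method does:

```c
#include <stddef.h>

/* State for a fair alternative over n guards. */
struct fair_alt {
    size_t n;      /* number of guards */
    size_t last;   /* index of the last selected guard */
};

/* ready[i] != 0 means guard i is ready.
   Returns the selected index, or -1 if no guard is ready
   (where the real FairSelect would block instead). */
static int fair_select(struct fair_alt *a, const int *ready)
{
    for (size_t k = 1; k <= a->n; k++) {
        size_t i = (a->last + k) % a->n;   /* scan starts after 'last' */
        if (ready[i]) {
            a->last = i;
            return (int)i;
        }
    }
    return -1;
}
```

A PriSelect, by contrast, would always scan from index 0, assuming the guard order encodes priority, so a permanently ready high-priority guard could starve the others.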
1.5. Workerpool Service

CSP.NET includes a workerpool service along with the standard library. It is a Windows Service capable of running on any Microsoft .NET 2.0 platform. Once installed, every CSP.NET program can use the service through the DistParallel class, which may be convenient, e.g. in grid-like programming. The workerpool service needs information about port numbers, IP addresses etc., and to that end an XML configuration file is supplied. The configuration file is read each time the service is started.

2. Tutorial

This section demonstrates how to write some fairly simple programs using the CSP.NET library. We will start by implementing a workerpool and then move on to demonstrate the use of alternatives and distributed parallels in CSP.NET.

2.1. Workerpool

Our first CSP.NET program shows how to implement a workerpool. Note that the workerpool in this example has no relation to the workerpool service provided by CSP.NET.
Figure 1. Workerpool structure
Figure 1 illustrates that we need a workerpool process connected to some worker processes through a One2Any channel, and we also need an Any2One channel connecting the workerpool to the outside world. Translating the figure into a CSP.NET program is very easy. All we need is a workerpool process, some workers and, of course, a main method.

2.1.1. Workerpool Process

The workerpool process is shown in listing 1. The implementation demonstrates process creation, channel connection/creation and channel communication in CSP.NET. Defining a process is very simple: just implement the Run-method of the ICSProcess interface.
class WorkerPool : ICSProcess
{
    private IChannelIn listenChannel;
    private IChannelOut workers;

    public WorkerPool(IChannelOut workerChannel)
    {
        workers = workerChannel;
    }

    public void Run()
    {
        listenChannel = Factory.GetAny2OneIn("WorkerPool");
        while (true)
        {
            ICSProcess p = listenChannel.Read();
            workers.Write(p);
        }
    }
}
Listing 1. Workerpool process code
The first thing to notice about the channels in the WorkerPool class is that they are declared as IChannelIn and IChannelOut channels. All CSP.NET channels implement these interfaces and hence the WorkerPool constructor will accept any of the four available channels. This is the standard way to declare channels in CSP.NET. Connecting to a channel or creating a channel is often done through the static Factory class, like listenChannel in listing 1. In this particular case a named Any2One channel is created, but the Factory class includes get-methods for every available channel. Creating named channels involves two steps - creating the channel object, and connecting to the channel object. As explained in section 3.1, the channel object is always created on the One-end of a channel. That means that an Any2One channel is created by the reader and a One2Any channel is created by the writer. The One2One channel has two One-ends and is, by definition, created by the writer. The other end of the channel simply has to connect to the channel object. If a process tries to connect to a channel before it has been created, it will block until the channel object is created by another process. To avoid deadlock when creating and connecting to named channels, it is recommended only to create named channels in the Run-method of CSP processes. Another notable point in listing 1 is the channel communication. The Write-method is used for writing data to a channel and the Read-method is used for reading data from a channel. Every channel in CSP.NET has a Write- and a Read-method.

2.1.2. Workers

With the workerpool process in place we move on to the implementation of the workers, shown in listing 2. The Worker process is similar to the WorkerPool process, and this is the typical appearance of processes in CSP.NET. A Worker simply reads a process from a channel before executing it by calling its Run-method.
The process will run in the Worker's thread of execution and no new processes will be accepted until the current process is done.

2.1.3. Main Method

The only thing left is our main program, shown in listing 3. The Init-method informs CSP.NET about the location of the Name Server (port number and IP-address) and registers the port number and IP-address of the current program.
class Worker : ICSProcess
{
    private IChannelIn processes;
    private ICSProcess process;

    public Worker(IChannelIn listenChannel)
    {
        processes = listenChannel;
    }

    public void Run()
    {
        while (true)
        {
            process = processes.Read();
            process.Run();
        }
    }
}
Listing 2. Worker process code
class Program
{
    static void Main(string[] args)
    {
        CspManager.Init("9091", "9090");
        One2AnyChannel chan = new One2AnyChannel();
        Worker[] workers = new Worker[3];
        for (int i = 0; i < 3; i++)
            workers[i] = new Worker(chan);
        WorkerPool workerPool = new WorkerPool(chan);
        new Parallel(new ICSProcess[] { workers[0], workers[1],
                                        workers[2], workerPool }).Run();
    }
}
Listing 3. Main program code
The Init-method must be called at the beginning of every CSP.NET program. In this case the Name Server is running on the local machine and we only have to supply the port numbers. As opposed to the WorkerPool process, which created a named channel, the main program creates an anonymous channel. It is possible to create anonymous channels through the Factory class, but normal instantiation is often used instead. To get our workerpool running we simply instantiate the workerpool process and the worker processes of listings 1 and 2 before using the Parallel class to run them in parallel. Note that the Parallel class doesn't run processes on remote machines, but only locally.

2.2. Alternative and Distributed Parallel

Our second CSP.NET program demonstrates the Alternative class and the DistParallel class. It contains two AddIntegers processes that repeatedly add a sequence of numbers and return the result, and an AltReader process that reads and processes the results. The processes are shown in listings 4 and 5. Again we notice the familiar pattern of a CSP.NET process, and we notice the Serializable attribute used to make a class serializable. This is necessary in order to distribute the processes to remote machines.
[Serializable]
class AddIntegers : ICSProcess
{
    private int number;
    private string channelName;

    public AddIntegers(int num, string name)
    {
        number = num;
        channelName = name;
    }

    public void Run()
    {
        IChannelOut result = Factory.GetOne2OneOut(channelName);
        for (int j = 0; j < 100; j++)
        {
            int res = 0;
            for (int i = 1; i < number; i++)
                res += i;
            result.Write(res);
        }
    }
}
Listing 4. AddIntegers process code
Listing 5 demonstrates the use of the Alternative class described in section 1.3. Notice the presence of the CSTimer, causing the program to terminate if new data isn't available within one second. The main program, shown in listing 6, is trivial. DistParallel distributes the processes to available remote machines running the CSP.NET workerpool service. Like the Run-method in Parallel, DistParallel's Run-method blocks until all processes have been executed once. It is appropriate to use DistParallel in a lot of applications, but there are of course problems for which automatic distribution of the processes isn't appropriate, e.g. peer-to-peer applications. CSP.NET includes the DistParallel class to be used when appropriate.

2.3. Transparency

As pointed out earlier, one of the main objectives of CSP.NET is to keep the architecture transparent to the programmer, and the last example program exhibits this transparency. The programmer doesn't have to pay any attention to the architecture because the DistParallel class will utilise remote workers if they are available, and otherwise create and use local workers.

3. Implementation Details

3.1. Distributed Applications and .NET Remoting

In CSP.NET, distributed applications are not only applications residing on different machines and communicating through the network. They can also be applications that consist of multiple communicating programs on a single machine, or a combination of the two. Regardless of the number of machines and programs involved, all communication between distributed applications is done through .NET Remoting.
[Serializable]
class AltReader : ICSProcess
{
    public void Run()
    {
        AltingChannelIn plus = Factory.GetOne2OneIn("plus");
        AltingChannelIn minus = Factory.GetOne2OneIn("minus");
        CSTimer timer = new CSTimer();
        Alternative alt = new Alternative(new Guard[] { plus, minus, timer });
        int result = 0;
        bool done = false;
        while (!done)
        {
            timer.RelativeTimeOut(1000);
            switch (alt.FairSelect())
            {
                case 0: result += plus.Read(); break;
                case 1: result -= minus.Read(); break;
                case 2: done = true; break;
            }
        }
    }
}
Listing 5. AltReader process code
class AltProgram
{
    static void Main(string[] args)
    {
        CspManager.Init("9092", "9090", "192.0.0.1", "192.0.0.2");
        new DistParallel(new ICSProcess[] { new AltReader(),
                                            new AddIntegers(100, "plus"),
                                            new AddIntegers(100, "minus") }).Run();
    }
}
Listing 6. Distributed-parallel program code
Remoting is a simple way for programs running in one process to make objects accessible to programs in other processes, whether they reside on the same machine or on another machine on the network. Remoting is similar to Java RMI and is very easy to customise and extend. That is exactly what is done in CSP.NET in order to facilitate code transfer between machines. The structure of the remoting system in CSP.NET is shown in figure 2.

[Figure 2 shows the sink chain: in process 1 a client object calls through the CSP.NET proxy, a formatter sink and a transport sink; the message crosses the network to the transport sink, CSP.NET sink and formatter sink in process 2, which deliver it to the server object.]

Figure 2. The remoting system structure in CSP.NET

In the CSP.NET case the formatter sink is a binary formatter that serializes all messages into a relatively compact binary format for efficient transport across the wire. The transport sink uses a simple TCP channel, which is fast and reliable. Given the flexibility of Remoting, it would be trivial to replace the binary formatter with a SOAP formatter, or the transport sink with an HTTP channel. The CSP.NET sink and the CSP.NET proxy are discussed in section 3.3. All constructs in CSP.NET that are in some way distributed use Remoting behind the scenes. That goes for named channels like One2One, Any2One and One2Any, as well as for the named versions of Barrier and Bucket. With regard to the channels, the real object - the server object - is always placed on the One-end and the proxies are created on the Any-ends. That means that processes on the Any-end can go out of scope without problems, as they only contain a proxy and not the real object. Only when the One-end goes out of scope does the channel cease to function. On channels where no One-end exists, it is impossible to know where best to put the real object, and the user runs the risk of breaking the channel every time a process using the channel goes out of scope. It is possible to implement an Any2Any channel through a combination of an Any2One channel and a One2Any channel, but the performance of such a channel would not be on par with the other channels in CSP.NET. That is the reason no named Any2Any channel exists in CSP.NET. While Remoting has many advantages, it is probably not as efficient as a tailor-made solution. In order to minimise the performance overhead we have chosen the fastest combination of formatter sink and transport sink. To further make sure that optimal speed is obtained wherever possible, CSP.NET offers a few mechanisms that optimise named constructs that do not cross process boundaries - see section 3.2.

3.2. Standalone Applications

When programming applications that are designed to run on more than one processor or machine, it is often impossible to know the exact runtime environment in advance. If a program is designed never to run on multiple machines or communicate over the network, there is no need to use named channels, barriers or buckets. If it is uncertain whether the application will run on one or more machines, it is wise to take that into consideration when designing the application.
That would typically be done by using processes communicating through named channels, and by using named barriers and buckets, in the parts of the program most likely to be run on separate machines. Alternatively it could mean using workerpools to offload heavy computations to other machines if present. Under normal circumstances that would mean poorer performance, given the overhead incurred by the use of Remoting.

To ensure that all applications that use named constructs and workerpools run at optimum speed, CSP.NET features a method called CspManager.InitStandAlone. By calling CspManager.InitStandAlone at the beginning of a program you tell CSP.NET that the application does not share named CSP.NET constructs with other applications. Nor will there be any remote workerpools available at runtime; local workerpools will be used instead. That means that neither a stand-alone Name Server nor the use of Remoting is necessary, which results in maximum performance. CspManager.InitStandAlone is particularly useful when the program is designed for both distributed and non-distributed environments. In the non-distributed case the use of CspManager.InitStandAlone will yield the same performance as if the application had been written specifically for a single machine.
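How such a stand-alone switch might route named-construct lookups can be sketched as follows (hypothetical Java names; the paper does not show CspManager's internals, and the real library is .NET code):

```java
import java.util.HashMap;
import java.util.Map;

public class CspManagerSketch {
    private static boolean standAlone = false;
    private static final Map<String, Object> localRegistry = new HashMap<>();

    /** Declares that no named constructs are shared with other applications. */
    public static void initStandAlone() { standAlone = true; }

    /** Resolves a named construct, bypassing the Name Server when stand-alone. */
    public static synchronized Object getNamed(String name) {
        if (standAlone) {
            // Local path: a plain in-process object, no Remoting overhead.
            return localRegistry.computeIfAbsent(name, n -> new Object());
        }
        // The distributed path would contact the Name Server and return a proxy.
        throw new UnsupportedOperationException("Name Server lookup not sketched");
    }

    public static void main(String[] args) {
        initStandAlone();
        // The same name now always resolves to the same local object.
        System.out.println(getNamed("myChannel") == getNamed("myChannel")); // true
    }
}
```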
A.A. Lehmberg and M.N. Olsen / An Introduction to CSP.NET
In distributed applications where CspManager.Init is called at the beginning of the program, all named CSP.NET constructs are by default distributed. That means that they are created through the stand-alone Name Server and communicate through Remoting. There might be situations, even in distributed applications, in which local named CSP.NET constructs are desirable. To accommodate such needs, all named CSP.NET constructs can be created with a parameter specifying that they should bypass Remoting and the Name Server Service.

3.3. Remote Code Transfer

The distributed nature of CSP.NET means that it is possible for a program on one machine to send an object over a distributed channel for another machine to use. If the object is an instance of an ordinary class, any program that sends or receives that object needs the correct assembly in order to compile. It is another matter if the channel is defined to transport objects implementing a specific interface and the programs on either side only invoke methods defined on that interface. Then only the interface needs to be known for the program to compile - that would be the case with a channel transporting CSP processes implementing the ICSProcess interface. Even though the program may compile, it will certainly throw an exception when run if the code that implements the interface is missing. We want CSP.NET to handle exceptions thrown because of missing assemblies without the user noticing that anything is amiss. One way to solve the problem would be to continuously monitor objects sent through named channels: if the code for an object was unavailable on the other end of the channel, the missing code would be sent over the channel prior to the actual object. Another possibility would be to always send the relevant code along with the object. The first option would mean a lot of unnecessary communication over the network and the second would mean sending a lot of redundant code.
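The interface-typed transfer can be illustrated in Java, whose serialization fails in the analogous way when the concrete class is absent on the receiving side (Greeter, send and receive are our own illustrative names, not part of CSP.NET or JCSP):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class InterfaceTransferSketch {
    // The receiving program compiles against this interface only.
    public interface Greeter extends Serializable { String greet(); }

    // Stands in for the sending side of a distributed channel.
    public static byte[] send(Greeter g) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) { oos.writeObject(g); }
        return bos.toByteArray();
    }

    // The receiver deserializes to the interface type. If the concrete class
    // were missing here, readObject would throw ClassNotFoundException - the
    // runtime failure that CSP.NET intercepts to trigger its assembly transfer.
    public static Greeter receive(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Greeter) ois.readObject();
        }
    }

    // A concrete class the receiving side need not know at compile time.
    public static class Hello implements Greeter {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) throws Exception {
        Greeter g = receive(send(new Hello()));
        System.out.println(g.greet()); // prints hello
    }
}
```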
To avoid chatty interfaces and redundant code transfers, CSP.NET employs the flexibility of the Remoting system to deal with missing assembly files. As illustrated in figure 2, a custom proxy and a custom sink are inserted into the Remoting system on the client side. Any exception thrown because of missing code is caught in either the proxy or the sink. When that happens, a recursive check is made for the missing assembly and any other assemblies it references. All the missing assemblies are then copied to the machine on which they are needed. They are placed in a special location that CSP.NET always checks when looking for assemblies.

3.4. ThreadPool

At the heart of any implementation of CSP lies the management of the threads used when a set of CSP processes is run in the Parallel construct. How the thread management is implemented depends on various considerations: how many threads do we want running concurrently, how transparent should the thread management be, and what limitations does the OS impose on the use of threads. The design of the thread management system in CSP.NET is very much dictated by the fact that it is meant to be a native .NET implementation of CSP. Lightweight threads - in Windows called fibers - are not available in the .NET API, which means that CSP.NET uses real heavyweight OS threads. As thread creation and destruction are relatively heavy operations, CSP.NET implements a threadpool to manage busy and free threads. The Parallel class and the CSPThreadPool class are closely connected, as the threadpool makes sure that all CSP processes run in a Parallel are allocated a free thread. After a parallel run completes, all threads are returned to the threadpool but they are not released, unless the programmer explicitly requests the threadpool to do so. The principle of always freeing, but not releasing, threads that are not
currently executing means that a CSP.NET program will never use more threads than are executing concurrently. The default, and recommended, stack size of threads in .NET is 1 MB. That means that programs with many CSP processes executing concurrently require a lot of memory to run, compared to sequential programs. In CSP.NET it is possible to manually control the maximum stack size of new threads, so processes that are known to be frugal in their use of stack memory can be allocated less memory. It would have been entirely possible to use the existing ThreadPool class in .NET, but that would have meant less flexibility. The .NET threadpool allows the programmer to specify neither maximum stack size nor thread priority, thus making it impossible to assign individual priorities to individual threads. By managing our own threadpool, none of those limitations apply to CSP.NET.

3.5. Workerpool and Distributed Parallel

Workpools are often used in distributed applications to achieve load balancing, and the workerpool¹ in CSP.NET is very similar to a standard centralised workpool [10]. It is implemented as a Windows Service; once started, the Service registers a listen-channel and the number of available workers on the Name Server. The Name Server is responsible for managing the workers, and hence every process in need of a worker must contact the Name Server, which is done implicitly through the DistParallel class. Whenever a DistParallel object requests a worker from the Name Server, a listen-channel connecting the object to a workerpool with free workers is returned, and the number of available workers on the specific workerpool is decremented. The worker itself will read and run one ICSProcess before informing the Name Server that it is ready for new jobs/processes. At the same time, the worker informs the DistParallel object that it is done, hence allowing the Run-method to return when all processes have executed once.
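The free-but-not-released thread policy of section 3.4 can be sketched in Java (illustrative names; the paper does not show CSPThreadPool's internals). Workers are created once, with a caller-chosen stack size, and park on a job queue between parallel runs instead of being destroyed:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSketch {
    private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>();

    public PoolSketch(int threads, long stackSize) {
        for (int i = 0; i < threads; i++) {
            // The four-argument Thread constructor lets us request a smaller
            // stack (the VM treats the value as a hint, much as .NET does).
            Thread t = new Thread(null, () -> {
                try {
                    while (true) jobs.take().run(); // park between runs, never die
                } catch (InterruptedException e) { /* explicit release */ }
            }, "csp-worker-" + i, stackSize);
            t.setDaemon(true);
            t.start();
        }
    }

    /** Tiny stand-in for Parallel.Run: run all tasks, wait, keep the threads. */
    public void runParallel(Runnable[] tasks) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(tasks.length);
        for (Runnable task : tasks)
            jobs.add(() -> { try { task.run(); } finally { done.countDown(); } });
        done.await(); // workers return to waiting on `jobs`, not to destruction
    }

    public static void main(String[] args) throws InterruptedException {
        PoolSketch pool = new PoolSketch(4, 256 * 1024); // ~256 KB stacks
        AtomicInteger n = new AtomicInteger();
        Runnable[] tasks = new Runnable[8];
        for (int i = 0; i < 8; i++) tasks[i] = n::incrementAndGet;
        pool.runParallel(tasks); // first "parallel run"
        pool.runParallel(tasks); // second run reuses the same four threads
        System.out.println(n.get()); // prints 16
    }
}
```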
The Name Server then increments the number of available workers for the given workerpool. By keeping count of available workers we ensure that processes only connect to workerpools with free workers, thereby avoiding starvation. The downside of this approach is that we have to contact the Name Server in order to get free workers, and we can only write one process to each worker.

4. Tests

Even though we attempt to demonstrate the performance of some aspects of CSP.NET, this section is by no means intended to be a thorough and comprehensive benchmark. No attempt is made to compare CSP.NET to other paradigms, and only one small comparison is made with existing CSP libraries. All single-machine performance tests have been run on a machine containing: Pentium 4M 2 GHz processor, 512 MB RAM, Windows XP SP2, .NET 2.0, Java 5.0. The distributed test was run on the single-machine setup plus a second machine containing: Pentium M 2 GHz, 1 GB RAM, Windows XP SP2 and .NET 2.0. The two machines were connected directly through a 100 Mbps network.

4.1. Channel Performance

We have not done any extensive performance comparisons between CSP.NET and other implementations of CSP like JCSP, C++CSP and KRoC (occam). Mostly because the main focus has been ease of use and transparency rather than high performance, but also because comparing the performance of CSP.NET to C++CSP and KRoC is irrelevant given the differences in functionality - see section 3.4. Given the similarities of CSP.NET and JCSP, we have done one small test to illustrate the performance of the various channels. The test consists of three processes. Process A sends an integer to Process B, which reads the number and sends it to Process C, which in turn reads the number and sends it back to Process A. That means that Process A only sends another number when the previous one has passed through all the processes in the loop.

Channel type    JCSP (jre 5.0)    CSP.NET (.NET 2.0)
One2One         16975             12488
Any2One         17165             17825
One2Any         16854             12257
Any2Any         17125             17524

Table 1. Comparing the performance of different anonymous channel types in JCSP and CSP.NET. All times are in milliseconds.

¹ We are deliberately using the term workerpool instead of workpool, since it is a pool of workers and not a pool of work/tasks.
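The three-process loop used in this test can be sketched in Java, with SynchronousQueue standing in for an unbuffered channel (illustrative code, not the actual JCSP or CSP.NET benchmark):

```java
import java.util.concurrent.SynchronousQueue;

public class RingSketch {
    static Thread relay(SynchronousQueue<Integer> in, SynchronousQueue<Integer> out, int rounds) {
        Thread t = new Thread(() -> {
            try {
                for (int i = 0; i < rounds; i++) out.put(in.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t.start();
        return t;
    }

    /** Process A: injects a number only after the previous one came full circle. */
    public static boolean run(int rounds) throws InterruptedException {
        SynchronousQueue<Integer> ab = new SynchronousQueue<>();
        SynchronousQueue<Integer> bc = new SynchronousQueue<>();
        SynchronousQueue<Integer> ca = new SynchronousQueue<>();
        Thread b = relay(ab, bc, rounds); // process B
        Thread c = relay(bc, ca, rounds); // process C
        boolean ok = true;
        for (int i = 0; i < rounds; i++) {
            ab.put(i);
            ok &= (ca.take() == i); // the same token comes back each round
        }
        b.join();
        c.join();
        return ok;
    }

    public static void main(String[] args) throws InterruptedException {
        final int rounds = 10_000; // the paper's test uses 500,000
        long start = System.nanoTime();
        boolean ok = run(rounds);
        System.out.println(rounds + " round trips, ok=" + ok
                + ", ms=" + (System.nanoTime() - start) / 1_000_000);
    }
}
```

Each round trip costs six channel operations (three writes, three reads), so the benchmark is dominated by channel synchronisation and context-switch cost rather than by computation.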
Table 1 shows the results of the test. The numbers are in milliseconds and indicate the time it takes to send 500,000 integers through the loop. The measurements reveal some clear differences between JCSP and CSP.NET. For all channels in JCSP the time is around 17 seconds, which is about the same as the Any2One and Any2Any channels in CSP.NET. The two remaining channels, One2One and One2Any, are somewhat faster in CSP.NET. It is impossible to tell if the differences in performance are due to the implementations of CSP.NET and JCSP, or if they are caused by differences between Java and .NET.

4.2. Performance Overhead

To measure the performance overhead incurred by using multiple threads on a uni-processor machine, we have run a series of tests based on the Monte Carlo Pi algorithm. Monte Carlo Pi is well suited to measuring the overhead of going from sequential to concurrent execution, because the algorithm can be divided into as many almost independent parts as there are threads of execution. Thus in an ideal world, with zero-time context switches, the sequential and the parallel versions should run equally fast, and the speedup using multiple processors should be linear or better - taking the increased number of cache hits into consideration. Listing 7 shows the sequential version and listing 8 shows the parallel version. We have run two different Monte Carlo Pi tests. The first one runs Monte Carlo Pi for a fixed number of iterations, with varying numbers of threads - from 1 to 600. To fit 600 threads into memory we use a thread stack size of around 200 kilobytes. Table 2 shows the rather surprising results. The concurrent version is consistently marginally faster than the sequential version, which runs in 112242 milliseconds, indicating that running a CSP.NET program with quite a number of threads on an idle machine is quite feasible.
We should add that the machine is much more responsive when running the sequential test, meaning that machines that have to do other work than just computing Monte Carlo Pi might benefit from fewer threads being run. It seems that the relatively long run time of the test program hides the overhead of thread creation and context switching. To better illustrate that the sequential version of Monte Carlo Pi is much faster at low iteration counts, we have done the comparison shown in table 3. Here both the sequential and the parallel versions of Monte Carlo Pi have been run at varying iteration counts. The parallel version has the number of threads fixed at 400 in all runs.
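The partitioning that makes Monte Carlo Pi suitable for this test can be sketched in Java (the paper's own versions, in C#, are listings 7 and 8 in the appendix; the seed pattern 115 + 10i follows the appendix code):

```java
import java.util.Random;

public class MonteCarloPiSketch {
    // Each thread throws its own darts with its own Random; only the
    // per-thread hit counts are combined at the end, so the parts are
    // almost independent.
    static long hits(long iters, long seed) {
        Random r = new Random(seed);
        long inside = 0;
        for (long i = 0; i < iters; i++) {
            double x = r.nextDouble() * 2.0 - 1.0;
            double y = r.nextDouble() * 2.0 - 1.0;
            if (x * x + y * y < 1) inside++;
        }
        return inside;
    }

    public static double estimate(long totalIters, int threads) throws InterruptedException {
        long[] counts = new long[threads];
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int id = i;
            ts[i] = new Thread(() -> counts[id] = hits(totalIters / threads, 115 + id * 10));
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        long total = 0;
        for (long c : counts) total += c;
        return 4.0 * total / totalIters;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(estimate(4_000_000, 4)); // should print something close to 3.14
    }
}
```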
Iterations       Number of threads    Par Time
1,200,000,000    1                    109371
1,200,000,000    50                   110121
1,200,000,000    100                  109989
1,200,000,000    200                  110092
1,200,000,000    300                  110579
1,200,000,000    400                  110889
1,200,000,000    600                  111480

Table 2. Monte Carlo Pi with fixed number of iterations and variable number of threads. All times are in milliseconds. Sequential time: 112242 milliseconds.

Iterations     Par Time    Seq Time
400            225         1
4000           224         1
40000          225         5
400000         262         42
4000000        595         378
40000000       3998        3798
400000000      37062       37258

Table 3. Parallel Monte Carlo Pi versus sequential Monte Carlo Pi at different iteration counts. Parallel version fixed at 400 threads. All times are in milliseconds.
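A rough reading of the first row of table 3: the work itself takes about 1 ms sequentially, so nearly all of the 225 ms parallel time is start-up and scheduling cost for the 400 fixed threads:

```java
public class OverheadEstimate {
    // Back-of-envelope from the first row of table 3.
    public static double perThreadMs() {
        double parMs = 225, seqMs = 1, threads = 400;
        return (parMs - seqMs) / threads;
    }

    public static void main(String[] args) {
        System.out.printf("~%.2f ms of overhead per thread%n", perThreadMs()); // ~0.56 ms
    }
}
```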
Looking at table 3 it is evident that the cost of creating the threads completely dominates when the number of iterations is low. That makes perfect sense, as creating a thread to do one simple calculation is a waste of time. But the numbers also confirm that as the workload of each thread increases, the overhead of thread creation eventually becomes almost invisible.

4.3. Distributed Performance

To illustrate the performance overhead of going distributed, we have extended our Monte Carlo Pi test to run across two machines. The work is divided into two parts of equal size. One part is sent to a second machine for processing and the other part is computed locally. When computation is done on the remote machine, the data is sent back and used in the calculation of Pi. Table 4 shows the results of the distributed version compared to the sequential version of Monte Carlo Pi. We varied the number of iterations while keeping the number of threads fixed at 10 on each machine.

Iterations     Dist Time    Seq Time
80             497          1
800            782          1
8000           563          2
80000          817          8
800000         953          53
8000000        1214         745
80000000       4279         7281
800000000      38619        73117

Table 4. Distributed Monte Carlo Pi with variable number of iterations. Each machine in the distributed test uses 10 threads. All times are in milliseconds.
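The speedup the largest run achieves can be checked directly from the last row of table 4:

```java
public class SpeedupEstimate {
    public static double speedup() {
        double seqMs = 73117, distMs = 38619; // last row of table 4
        return seqMs / distMs;
    }

    public static void main(String[] args) {
        System.out.printf("speedup = %.2f%n", speedup()); // prints 1.89
    }
}
```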
It is not surprising that the sequential version of Monte Carlo Pi is vastly superior when the number of iterations in each of the 20 processes is low. The overhead of Remoting and network communication clearly dominates at low iteration counts, but the time is still well below one second. When the time exceeds a couple of seconds, the distributed version comes into its own and is much faster than the sequential version. A speedup of 2 is almost achieved, and even though the theoretical maximum speedup is unknown, the performance gain is significant. We have included the code for the distributed Monte Carlo Pi in the appendix to show an example of a distributed CSP.NET program that does not use workerpools and DistParallel. The program running on the server side is shown in listing 9 and the client program is shown in listing 10.

5. Conclusions

CSP.NET is a new implementation of the CSP paradigm suitable for both distributed-memory multicomputers and shared-memory multiprocessor systems. A lot of functionality is provided in the library, but some work remains, e.g. robust error handling. Future developments include channel poisoning as known from C++CSP [3] and JCSP [1], user-definable Name Servers, and of course further work needs to be done on the workerpool. DistParallel could also be extended to provide exactly the same methods and functionality as the normal Parallel class, making the boundary between distributed applications and local applications disappear completely. A thorough benchmark comparing CSP.NET to other libraries and paradigms would also be a good idea. We hope that CSP.NET will introduce new programmers to the CSP paradigm and advocate CSP as the right choice for parallel and concurrent programming in Microsoft .NET. The library will be available on the website www.cspdotnet.com.

References

[1] Alastair R. Allen and Bernhard Sputh. JCSP-Poison: Safe Termination of CSP Process Networks. In Communicating Process Architectures 2005, pages 71–107. IOS Press, Amsterdam, Sept 2005.
[2] N.C.C. Brown. C++CSP Networked. In I.R. East, D. Duce, M. Green, J.M.R. Martin, and P.H. Welch, editors, Communicating Process Architectures 2004, pages 185–200. IOS Press, Amsterdam, 2004.
[3] N.C.C. Brown and P.H. Welch. An Introduction to the Kent C++CSP Library. In J.F. Broenink and G.H. Hilderink, editors, Communicating Process Architectures 2003, pages 139–156. IOS Press, Amsterdam, 2003.
[4] Communicating Sequential Processes for Java. www.cs.kent.ac.uk/projects/ofa/jcsp/.
[5] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1985.
[6] P.H. Welch, J.R. Aldous, and J. Foster. CSP Networking for Java (JCSP.net). In P.M.A. Sloot, C.J.K. Tan, J.J. Dongarra, and A.G. Hoekstra, editors, Computational Science - ICCS 2002, volume 2330 of Lecture Notes in Computer Science, pages 695–708. Springer-Verlag, April 2002.
[7] Nan C. Schaller, Gerald H. Hilderink, and Peter H. Welch. Using Java for Parallel Computing - JCSP versus CTJ. In Peter H. Welch and André W.P. Bakkers, editors, Communicating Process Architectures 2000, pages 205–226. IOS Press, Amsterdam, Sept 2000.
[8] Peter H. Welch and Brian Vinter. Cluster Computing and JCSP Networking. In James Pascoe, Roger Loader, and Vaidy Sunderam, editors, Communicating Process Architectures 2002, pages 203–222. IOS Press, Amsterdam, Sept 2002.
[9] P.H. Welch and D.C. Wood. The Kent Retargetable occam Compiler. In Brian O'Neill, editor, Parallel Processing Developments, Proceedings of WoTUG 19, volume 47 of Concurrent Systems Engineering, pages 143–166, Amsterdam, The Netherlands, 1996. World occam and Transputer User Group, IOS Press. ISBN: 90-5199-261-0.
[10] Barry Wilkinson and Michael Allen. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers (2nd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2004.
A. Monte Carlo Pi Source Code

A.1. Sequential Monte Carlo Pi

private static void CalcMonteCarloPiSeq()
{
    double x, y, area;
    int pi = 0;
    int i;
    Random r = new Random();
    for (i = 0; i < Max; i++)
    {
        x = r.NextDouble() * 2.0 - 1.0;
        y = r.NextDouble() * 2.0 - 1.0;
        if ((x * x + y * y) < 1)
            pi++;
    }
    area = 4.0 * (double)pi / (double)Max;
    Console.WriteLine("Seq Area: pi/Max " + pi + "/" + Max + " = " + area);
}
Listing 7. Sequential Monte Carlo Pi
A.2. Parallel Monte Carlo Pi

public class Worker : ICSProcess
{
    long iters;
    int pi;
    Random r;
    Barrier ba;

    public Worker(long iterNum, int seed)
    {
        iters = iterNum;
        pi = 0;
        r = new Random(seed);
    }

    public Worker(long iterNum, int seed, Barrier b)
    {
        iters = iterNum;
        pi = 0;
        r = new Random(seed);
        ba = b;
    }

    public int getPI() { return pi; }

    public void Run()
    {
        double x, y;
        if (ba != null)
            ba.Sync();
        for (int i = 0; i < iters; i++)
        {
            x = r.NextDouble() * 2.0 - 1.0;
            y = r.NextDouble() * 2.0 - 1.0;
            if ((x * x + y * y) < 1)
                pi++;
        }
    }
}

private static void CalcMonteCarloPiPar(int num)
{
    Worker[] processes = new Worker[num];
    for (int i = 0; i < num; i++)
        processes[i] = new Worker(Max / num, 115 + i * 10);
    new Parallel(processes).Run();
    int pi = 0;
    for (int i = 0; i < num; i++)
        pi += processes[i].getPI();
    double area = 4.0 * (double)pi / (double)Max;
    Console.WriteLine("Par Area: pi/Max " + pi + "/" + Max + " = " + area);
}
Listing 8. Parallel Monte Carlo Pi
A.3. Distributed Monte Carlo Pi – Server

[Serializable]
public class DistControl : ICSProcess
{
    int numThreads, pi;
    long iterNumPerWorker;
    string channelName;

    public DistControl(string name, int threads, long iterPerWorker)
    {
        numThreads = threads;
        iterNumPerWorker = iterPerWorker;
        channelName = name;
    }

    public void Run()
    {
        Worker[] processes = new Worker[numThreads];
        for (int i = 0; i < numThreads; i++)
            processes[i] = new Worker(iterNumPerWorker, 1763 + i * 10);
        new Parallel(processes).Run();
        for (int i = 0; i < numThreads; i++)
            pi += processes[i].getPI();
        IChannelOut resultChannel = Factory.GetOne2OneOut(channelName);
        resultChannel.Write(pi);
    }
}

public class LocalDistControl : ICSProcess
{
    string resultChannel;
    int numThreads, pi;
    long iterNumPerWorker;
    Barrier ba;

    public LocalDistControl(string name, int threads, long iter, Barrier b)
    {
        resultChannel = name;
        numThreads = threads;
        iterNumPerWorker = iter;
        ba = b;
    }

    public void Run()
    {
        IChannelOut work = Factory.GetOne2OneOut("workchannel");
        DistControl dc = new DistControl(resultChannel, numThreads, iterNumPerWorker);
        work.Write(dc);
        ba.Sync();
        IChannelIn result = Factory.GetOne2OneIn(resultChannel);
        pi = result.Read();
    }

    public int getPI() { return pi; }
}

private static void CalcMonteCarloPiDist(int numlocal, int numdist)
{
    Worker[] processes = new Worker[numlocal];
    Barrier b = new Barrier(numlocal + 1);
    long localWork = Max / 2;
    for (int i = 0; i < numlocal; i++)
        processes[i] = new Worker(localWork / numlocal, 115 + i * 10, b);
    Parallel p = new Parallel();
    long iterPerWorker = (Max - localWork) / numdist;
    LocalDistControl dc = new LocalDistControl("resultChannel5", numdist, iterPerWorker, b);
    p.AddProcess(dc);
    p.AddProcess(processes);
    p.Run();
    int pi = 0;
    for (int i = 0; i < numlocal; i++)
        pi += processes[i].getPI();
    pi += dc.getPI();
    double area = 4.0 * (double)pi / (double)Max;
    Console.WriteLine("DistPar Area: pi/Max " + pi + "/" + Max + " = " + area);
}
Listing 9. Distributed Monte Carlo Pi – server side
A.4. Distributed Monte Carlo Pi – Client

public class runTest : ICSProcess
{
    string channelName;

    public runTest(string name)
    {
        channelName = name;
    }

    public void Run()
    {
        IChannelIn work = Factory.GetOne2OneIn(channelName);
        ICSProcess p = work.Read();
        p.Run();
        Console.WriteLine("Work done");
    }
}

public class Program
{
    static void Main(string[] args)
    {
        CspManager.Init("9097", "9090", "192.0.0.1", "192.0.0.2");
        Console.WriteLine("after Init");
        new Parallel(new ICSProcess[] { new runTest("workchannel") }).Run();
        Console.ReadKey();
    }
}
Listing 10. Distributed Monte Carlo Pi – client side
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Performance Evaluation of JCSP Micro Edition: JCSPme

Kevin CHALMERS, Jon KERRIDGE and Imed ROMDHANI
School of Computing, Napier University, Edinburgh, EH10 5DT
{k.chalmers, j.kerridge, i.romdhani}@napier.ac.uk

Abstract. Java has become a development platform that has migrated from its initial focus on small form devices, to large full-scale desktop and server applications, and finally back to the small in the form of Java-enabled mobile phones. Here we discuss the necessary requirements to convert the existing JCSP framework so that it can be used in these resource-constrained systems. We also provide some performance comparisons on various platforms to evaluate this implementation.

Keywords. JCSP, Java 2 Micro Edition, Mobile Devices, Real-time systems.
Introduction

Java, as both a platform and a development language, has on occasion been examined as a possible tool for the development of real-time systems. Evaluations of such implementations range from the almost favorable [1] to the total dismissal [2] of Java as a potential real-time solution, but this has not deterred implementations of Java in embedded systems. Mobile phones especially, in the form of Java MIDlet capabilities [3], have shown that while not the best solution, the portability and popularity of the language have pushed it into the real-time domain. Here we do not consider Java's properties as a real-time platform, as this falls outside the main focus of the work presented. Such features as a more accurate system timer and reduction of the impact of garbage collection are not addressed, the main reason being that many mobile devices have a fixed Java implementation that cannot be replaced with a more real-time-friendly one. The main aim and motivation here is to provide a first attempt at modifying the JCSP framework to work on a mobile or embedded device, as well as to provide some ideas on how to scale the resulting package back up to something closer to the full-scale implementation.

JCSP has recently been released as open-source (www.jcsp.org), and when coupled with the expanding embedded Java market it becomes possible and desirable to implement a version of JCSP [4, 5] that can operate on systems with resource footprints as small as those of common mobile phones. JCSP itself finds its roots in Communicating Threads for Java (CTJ) [6], which has been examined in a real-time environment [7], so initially it may appear that implementing such a system would be fairly simple. This is not the case, however, as there are many limitations in a Java 2 Micro Edition (J2ME) environment when compared to the full-scale Java 2 Standard Edition (J2SE) available on desktop machines.
The necessary steps to convert JCSP to operate on this much-reduced platform are covered in Section 2, and are really more a case of removing features than of modifying code. First of all, in Section 1 we examine current mobile device technology and how the various versions of J2ME sit within it. Section 4 presents the results of the CommsTime test on various different virtual machine implementations, to provide a first evaluation of how well JCSP will
operate in resource-constrained devices. Two different phone platforms are assessed, one a common PDA/mobile phone crossover and the other a common mobile phone, and this helps us determine the efficiency of the underlying thread model of each implementation. These results are important as they can be used to determine the context switch time within a mobile device, and how it compares to that of a full-scale desktop machine. Also, in Section 3 we show how small a memory footprint we can actually achieve by removing various unnecessary classes.

1. Mobile Devices

When the term mobile device is used, most people think primarily of mobile phones as opposed to anything else. This should not always be the case; more powerful form factors such as Personal Digital Assistants (PDAs) also fall under this category. Digital cameras and MP3 players can also be considered a form of mobile device, wireless network capabilities now being available for digital cameras [8]. The boundary between these is becoming vaguer, however. Cameras are being integrated into mobile phones along with other capabilities, leading to the evolution of the smart-phone [9] - PDA and phone combined. The smart-phone is not only used to make telephone calls, but to play games, send and receive email, and browse the Internet - as well as having the integrated digital camera. Considering the capabilities of most modern smart-phones, it is not uncommon for a device to have a 400 MHz processor and at least 64 Mbytes of RAM. Adding a small memory card of 2 Gbytes costs virtually nothing, and provides a large amount of storage space in a very portable device. This specification is easily up to the standard of the common desktop of around 6-7 years ago. The real motivator behind mobile technology is wireless communication, be it Wi-Fi, Bluetooth, or GPRS mobile phone networks.
The original mobile phone transmission technology of GSM has been upgraded to 3G communication technologies, allowing fast connection speeds on the move as well as in a wireless hotspot. Work is also being carried out to take us further beyond 3G.

1.1 Flavours of J2ME

Of course, mobile devices are much more resource-constrained than a full desktop system, where GHz of processor and Gbytes of memory are more the norm than the MHz and Mbytes of small form factor devices. Utilizing existing development platforms as-is becomes difficult, and Sun has therefore provided a specification for mobile and embedded devices: Java 2 Micro Edition. J2ME comes in a variety of different platforms, each aimed at various applications and device configurations [10]. Figure 1 illustrates the various Java platforms and how they relate to one another [11]. The two platforms on the left are those aimed at the more traditional desktop and server systems - Java 2 Standard Edition (J2SE) and Java 2 Enterprise Edition (J2EE) respectively. The two platforms on the right are the various incarnations of J2ME. The Connected Device Configuration (CDC) is aimed at higher-end devices, such as smart-phones and set-top boxes. Its various profiles - Foundation, Personal Basis and Personal - add further packages and functionality on top of their predecessor. The Personal Profile actually gives a platform comparable to J2SE 1.3.1, with the various deprecated methods removed to condense the final size of the architecture. The lowest-level Java illustrated here (there is also Java Card, which is even smaller) is the Connected Limited Device Configuration (CLDC), which is aimed at
mobile phones and embedded systems - ones with less than 512 Kbytes of memory [10]. This is a very different platform from the one found in J2SE. One of the major differences is the virtual machine that runs Java on small devices. KVM stands for Kilobyte Virtual Machine, and is a significantly cut-down version of a standard Java VM. A KVM is often installed on a chip within a device [3], as opposed to the software-based JVM on the desktop PC.

[Figure: the Java platform family - J2EE and J2SE, running on the Java Virtual Machine, alongside the J2ME stacks: CDC with its Personal, Personal Basis and Foundation Profiles, and CLDC with MIDP running on the KVM, each with optional packages.]

Figure 1. Java Platforms
The Mobile Information Device Profile (MIDP) builds on the basic, console-based CLDC platform, allowing graphical user interfaces and other richer media content. Aimed specifically at the mobile phone market, it has now become uncommon to find a phone that does not implement this profile in some manner. As different device manufacturers implement the profile in different ways, Sun Microsystems has only provided guidance as to what the specification should include. As such, it is not uncommon to find different phones displaying slightly different interfaces for the same MIDlet, or even refusing to run a MIDlet that another device can. MIDlets are also packaged up into a Java Archive (JAR) file for distribution. This means that all developed class files must be included within the application itself. There is no such property as a class path within a MIDP environment, meaning that the same class may be replicated across several different application JAR files, with no method of sharing between them. CLDC and MIDP do not provide the Java Native Interface (JNI), so updating the standard thread model in Java with a more efficient approach is not possible. This is justified in that the aim is to have a package usable on a multitude of mobile devices and embedded systems without having to recompile device-specific code for each - an approach more suited to C++CSP. Combined with the possibility of embedded Java chips within a mobile phone, such upgrading is not the best option. Some mobile phones also have very limited SDKs available for developing specific libraries, and many do not even have this possibility, being hardware-based only.
Because of this deep relationship between Java on mobile phones (MIDP) and Java on other embedded systems (CLDC), it is only necessary to initially aim a micro edition of JCSP at the CLDC configuration.

2. Converting JCSP

Before converting JCSP to operate on J2ME, a first analysis of the general package differences between CLDC and J2SE needs to be undertaken. An investigation of the specification of CLDC shows a number of changes that will need to be made to JCSP to allow it to compile at all, and only these are discussed here. Many other differences exist between CLDC and J2SE, but they are not relevant to this discussion.

2.1 Removal of Certain Packages

The first package that can be removed from JCSP for the micro edition is the active components in the jcsp.awt package. As previously mentioned, CLDC is console-based only, and although MIDP does provide user interface components, these are in no way related to the basic AWT components of J2SE. Thus it would be necessary to create a new range of active components for MIDP, which at the moment is left for future work. The second major package removal relates to the network functionality provided with JCSP Network Edition. CLDC uses a completely different technique for creating underlying connections, relying on the governing factory class Connector. Specifically, there are no such usable objects as Socket and ServerSocket, meaning that the capabilities of the jcsp.net package cannot be compiled under CLDC in their current form. There are other reasons for removing this package too, which we explore in Section 2.3. For now, however, the different communication model is enough justification for its removal. Two other packages, jcsp.win32 and jcsp.demos, are also removed. These are not required parts of the overall framework: jcsp.win32 is platform-specific, and jcsp.demos contains demonstration programs, some of which require the already removed jcsp.net and jcsp.awt packages.
2.2 Differences in the Thread Model

Another major difference when implementing the JCSP framework is the thread model of J2ME compared with that of J2SE. CLDC inevitably has a cut-down thread model; of particular importance is the removal of background (in Java terms, daemon) threads. Within JCSP, the underlying threads of the processes in a parallel construct are daemon threads, so for the CLDC implementation this feature needs to be changed. A call to exit the system (System.exit(0)) must therefore be made whenever the main process has ended, in case any residual threads are still running.

As mentioned earlier, deprecated methods are removed from J2ME implementations, so such (dangerous) capabilities as pause and resume are not available. These methods were never used in the underlying functionality of JCSP, but some functionality that was used has also been removed in CLDC. In particular, the method to set the name of a thread is no longer available; however, the name can be set in the constructor of the Thread object, and this has been done instead. Another omission is the ThreadGroup object, which JCSP uses to determine the priority of processes within a PriParallel structure. Instead, we use the Thread object directly.
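The two workarounds described above can be sketched as follows. This is a hedged illustration (namedWorker is not a JCSP method): the thread name is fixed in the constructor because setName is unavailable, and System.exit(0) terminates the program because non-daemon process threads would otherwise keep the VM alive.

```java
public class ThreadNaming {
    // Under CLDC, Thread.setName() and Thread.setDaemon() are absent, so a
    // name must be passed at construction time and process threads cannot
    // be marked as background threads.
    public static Thread namedWorker(Runnable work, String name) {
        // J2SE JCSP could do: t.setName(name); t.setDaemon(true);
        return new Thread(work, name);   // CLDC: name fixed in constructor
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = namedWorker(() -> System.out.println("process running"),
                               "proc-0");
        t.start();
        t.join();
        // Because threads cannot be daemons under CLDC, the program must
        // terminate explicitly once the main process completes:
        System.exit(0);
    }
}
```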
2.3 Unavailable Interfaces

A number of interfaces are no longer available in CLDC. Of these, two are used extensively within the remaining JCSP packages and classes. The Serializable interface allows objects to be converted into bytes for either storage or transfer across a communications medium. This interface is a central requirement of the jcsp.net package, another reason for its removal. In addition, a high proportion of the remaining classes implement Serializable for exactly this reason of network transfer; these classes have been altered to remove the interface.

The second interface of note is Cloneable. This interface allows deep copies (i.e. full copies, not references) of objects to be made by calling a clone method, although clone itself may simply return a reference to the actual object if so desired, thereby providing only a shallow copy. CLDC does not support object cloning: there is no Object.clone() method for all objects to use. Cloning is used by the various buffers in the jcsp.util package. The decision here is whether to remove Buffered Channels from JCSP completely (along with the jcsp.util package) or to create a Cloneable interface of our own, the latter requiring only a clone method to be declared within a new interface. The second choice has been taken, as it is fairly simple and allows the useful functionality of Buffered Channels to remain.

2.4 Reduced Collections Framework

The Collections Framework in Java defines a number of data structures that can be used to store objects for later retrieval. J2SE has a large range of collection types, from Array Lists (essentially dynamic arrays) to Hash Maps. CLDC obviously cannot support as many, due to its smaller size and generally smaller amount of available resources. In particular, the various hashed data structures have been removed.
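The home-grown Cloneable replacement described in Section 2.3 amounts to very little code. The sketch below uses assumed names (MJCloneable, FifoBuffer) that are not JCSPme's own; the point is simply that an explicit clone method in a new interface is enough for the jcsp.util style buffers.

```java
import java.util.Vector;

// A stand-in for java.lang.Cloneable: CLDC has no Object.clone(), so the
// interface just demands an explicit clone method.
interface MJCloneable {
    Object clone();
}

// A FIFO buffer in the style of jcsp.util, copying its contents by hand.
class FifoBuffer implements MJCloneable {
    private final Vector data = new Vector();
    public void put(Object o) { data.addElement(o); }
    public Object get()      { return data.remove(0); }
    public int size()        { return data.size(); }
    public Object clone() {
        FifoBuffer copy = new FifoBuffer();
        for (int i = 0; i < data.size(); i++)   // element-by-element copy
            copy.put(data.elementAt(i));
        return copy;
    }
}

public class CloneDemo {
    public static void main(String[] args) {
        FifoBuffer b = new FifoBuffer();
        b.put("x");
        FifoBuffer c = (FifoBuffer) b.clone();
        b.get();                     // draining b leaves the copy intact
        System.out.println(c.size());
    }
}
```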
The Hash Table was used within JCSP to store objects (threads, in fact) in the underlying parallel structures, so this data structure has been replaced by a Vector (another simple dynamic array type). This does make it harder to obtain a specific thread in the parallel object, but as fewer threads should (in theory) be running, this should not have a major impact on performance.

2.5 No Number Class

The final necessary changes involve the plug and play components. Some of these utilized Number objects (the parent class of the Integer, Float, Double, etc. primitive wrapper classes). CLDC does not support this class, so Integer is used instead. This is justified in that the plug and play components provided with JCSP converted the Number object into an Integer before using it anyway.
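The Hashtable-to-Vector substitution of Section 2.4 trades keyed access for a linear scan. The sketch below is illustrative (the names ProcessTable and find are not JCSP's); it shows why the trade is acceptable when only a handful of threads exist.

```java
import java.util.Vector;

// Where J2SE JCSP keyed threads in a Hashtable, the CLDC version keeps
// them in a Vector and finds them by linear scan.
public class ProcessTable {
    private final Vector threads = new Vector();

    public void add(Thread t) { threads.addElement(t); }

    // O(n) lookup by name instead of the Hashtable's O(1) keyed access;
    // tolerable because a CLDC device runs few threads.
    public Thread find(String name) {
        for (int i = 0; i < threads.size(); i++) {
            Thread t = (Thread) threads.elementAt(i);
            if (t.getName().equals(name)) return t;
        }
        return null;
    }

    public static void main(String[] args) {
        ProcessTable table = new ProcessTable();
        table.add(new Thread(() -> { }, "worker-1"));
        System.out.println(table.find("worker-1") != null);
    }
}
```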
3. Reducing the Memory Footprint

Once all the previously described alterations have been made, the remaining JCSP packages compile quite easily, resulting in a set of classes that can be used on a CLDC based platform, such as a Java enabled mobile phone. The final package is still quite large, however: around 163 Kbytes, on a device that may have less than 512 Kbytes. On the more memory constrained devices this is obviously going to become a problem, and within a MIDP environment it can be much worse. As previously mentioned, a MIDlet requires all the necessary classes to be packaged together into a single JAR file – there
being no such thing as a class path. If two JCSP enabled MIDlets are on the same mobile device, then the JCSP package is effectively replicated twice – once for each MIDlet – taking up over 300 Kbytes of memory. It therefore becomes favorable to remove classes that are not always used; these can then be added to any implementation that requires them.

3.1 Removing Classes

Specifically, there are four groups of classes that can be removed from the framework while still providing the main functionality expected of JCSP. These four groups are:

- Plug and Play Components – A whole package can be removed in the form of Plug and Play. The classes in this package, although useful, are not required for every application, and can be added as required. This could be viewed as quite a major loss, but not if the package is included as an optional addition.

- Connections – Connections hide a two channel structure (input and output) from the developer. The basic premise is to provide a mechanism that appears very similar to a client-server connection over a network.

- Rejectable Channels – These allow input ends to reject messages sent over a channel. They are used extensively in the already removed jcsp.net package.

- Call Channels – These channels provide synchronising method calls between processes (over which data may flow in both directions). Both processes must agree to the call, in the same way as both processes must agree to communicate on a channel. The accepting process may wait for the call as part of an Alternative. They simplify many kinds of transaction that would otherwise need a sequence of normal channel communications, and they have the potential for increased efficiency. However, distributed versions are not yet available and, for now, they are removed. [Note: Call Channels should not be confused with synchronized method calls in standard Java.]
With these classes removed, the size of JCSPme is around 90.3 Kbytes: a significant reduction from the 163 Kbytes of the first compilation, and a dramatic decrease from the 1.06 Mbytes of the full JCSP package. This could possibly be reduced further by removing other classes, such as Buffered Channels and primitive integer channels. For now, however, we leave the remaining classes in place.

4. Testing

Most recently, comparisons between various CSP implementations have been performed using the CommsTime test [12]. This involves four basic processes – prefix, delta, successor and a custom consumer – connected together to generate the natural numbers in sequence. The time taken to produce each number is measured, giving an idea of the context switch time on each platform. In 2003, using a 667 MHz Celeron processor with 256 Mbytes of RAM, JCSP running under Java 1.4 recorded a time of 230 microseconds to produce a number (occam scored a far more impressive 1.3 microseconds).

To test the Micro Edition of JCSP, two different platforms were used: firstly, a PDA/smart phone running Windows Compact Edition 4.2 with a 416 MHz Intel PXA272 processor and 64 Mbytes of memory; and secondly, a Sony Ericsson S700i mobile phone. The smart phone was capable of utilizing a number of different virtual machines, as well as a .NET
implementation of CSP currently under development. The results are presented in Figure 2. A standard desktop machine running normal JCSP is given as a comparison, as is the smart phone running a CDC Personal Profile with full JCSP support. As these results show, CommsTime on such a reduced platform is still quite reasonable when compared to the Celeron 667 MHz results. It should be pointed out that although the smart phone has a 416 MHz processor, it generally tries to avoid operating at that speed, to preserve battery life. The two IBM JVMs give differing results, probably due to MIDP's smaller footprint and correspondingly better efficiency.

[Figure 2: bar chart of CommsTime results (time in μs, scale 0–1400) for: S700i; IBM J9 MIDP; IBM J9 CDC PP; Intent MIDP; .NET CF 2.0; Celeron 667; P4 3.0 GHz.]

Figure 2. CommsTime Results
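The CommsTime network of prefix, delta, successor and consumer can be sketched in plain Java. This shows only the shape of the process network, not JCSP's actual API: SynchronousQueue stands in for a synchronous CSP channel, and the names CommsTime, proc and firstNaturals are illustrative.

```java
import java.util.concurrent.SynchronousQueue;

// CommsTime: prefix -> delta -> successor -> prefix, with delta also
// feeding a consumer. Each number delivered costs four channel events.
public class CommsTime {
    static Thread proc(String name, Runnable body) {
        Thread t = new Thread(body, name);
        t.setDaemon(true);          // let the JVM exit when the consumer is done
        t.start();
        return t;
    }

    public static int[] firstNaturals(int n) throws InterruptedException {
        SynchronousQueue<Integer> a = new SynchronousQueue<>(); // prefix -> delta
        SynchronousQueue<Integer> b = new SynchronousQueue<>(); // delta -> succ
        SynchronousQueue<Integer> c = new SynchronousQueue<>(); // succ -> prefix
        SynchronousQueue<Integer> d = new SynchronousQueue<>(); // delta -> consumer
        proc("prefix", () -> {      // emits 0, then copies c to a
            try { a.put(0); while (true) a.put(c.take()); }
            catch (InterruptedException e) { }
        });
        proc("delta", () -> {       // copies a to both d and b
            try { while (true) { int x = a.take(); d.put(x); b.put(x); } }
            catch (InterruptedException e) { }
        });
        proc("succ", () -> {        // increments b into c
            try { while (true) c.put(b.take() + 1); }
            catch (InterruptedException e) { }
        });
        int[] out = new int[n];     // consumer: read n numbers
        for (int i = 0; i < n; i++) out[i] = d.take();
        // a real benchmark would time this loop and divide by n
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int x : firstNaturals(5)) System.out.print(x + " ");
    }
}
```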
The Intent VM is common on many Windows CE based smart phones on the market, and is claimed to be the fastest such VM available for smart phones (http://taogroup.com), a claim these results appear to support. The .NET result is, however, surprisingly better than any of the Java platforms. The most likely explanation is Microsoft's knowledge of the underlying architecture of the device and its operating system, knowledge unlikely to be shared by IBM and Intent. This is interesting in that it shows there is room for improvement within the Java implementations, if more information can be gathered on how the threading system is handled on Windows CE devices.

A question that could be asked is: how much does a thread cost within CLDC and MIDP? This is difficult to answer, and the results would not be comparable, as each VM implementation is specific to its platform. The memory allocated to the virtual machine varies from device to device, and it is questionable how to compare a MIDP implementation built on a chip with one implemented in software. Nevertheless, the results gathered here give a good idea of how a multithreaded application is likely to perform on a small scale device.

5. Future Work and Conclusions

Much further work can be undertaken to extend JCSPme to provide functionality closer to the full package. Of particular interest are an Active MIDlet UI and networked capabilities. With these, a definite move into the mobile device market can be made using JCSP, providing high level abstractions for developing complex systems.
5.1 Active UI for MIDlets

As JCSP has an AWT package, and work has been done on a Swing equivalent, it seems feasible that a collection of active components for MIDlets could be achieved. MIDP provides the Liquid Crystal Display User Interface (LCDUI) API as an alternative to AWT: a very simple set of very basic components. Developing active versions of these might actually ease problems in developing MIDlet applications in general, which have a very different development model from standard applications, having more in common with the Java applet model.

5.2 Persisting Objects

The lack of the Serializable interface is a major hindrance to including network functionality in JCSPme, although it may be overcome by using object persistence methods instead. These generally require the developer to determine how an object should be converted into an array of bytes. This differs from the mechanisms generally used in Java: there are no object streams or object input and output classes. As the basic types (String, int, etc.) can easily be converted into byte arrays, the belief is that any other object can be too, as long as all its component parts are basic types or can themselves be persisted. This requires any object needing persistence to implement its own persist and restore methods, unlike object serialization, where this is hidden. It will not always work, as even in standard Java some objects contain elements that are not of a primitive type (such as a pointer to a database record).

5.3 Networked Channels

Networked channels can quite easily be developed as things stand, if primitive data types are used instead of objects. By using the Connection class and creating TCP/IP connections, an architecture not unlike standard JCSP can be developed. If a method of persistence is also incorporated, then it will be possible to send full scale Java objects across these networked channels.
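The persistence scheme of Section 5.2 can be sketched with the stream classes CLDC does provide (DataOutputStream and friends). The interface name Persistable and the Point example are assumptions for illustration, not JCSPme's own API: the developer decides the byte layout by hand.

```java
import java.io.*;

// Hand-rolled persistence: each object flattens itself to bytes and
// restores itself from bytes, replacing the absent Serializable machinery.
interface Persistable {
    byte[] persist() throws IOException;
    void restore(byte[] bytes) throws IOException;
}

class Point implements Persistable {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    public byte[] persist() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(x);            // the developer chooses the byte layout
        out.writeInt(y);
        return buf.toByteArray();
    }

    public void restore(byte[] bytes) throws IOException {
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes));
        x = in.readInt();
        y = in.readInt();
    }
}

public class PersistDemo {
    public static void main(String[] args) throws IOException {
        Point p = new Point(3, 4);
        Point q = new Point(0, 0);
        q.restore(p.persist());     // round-trip through a byte array
        System.out.println(q.x + "," + q.y);
    }
}
```

Such byte arrays could then travel over a networked channel built on the Generic Connection Framework, as Section 5.3 suggests.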
This would take the JCSPme platform a long way towards becoming a framework for developing distributed systems on mobile phones. The same cannot yet be said for a mobile process based system, as CLDC does not provide any method to load classes dynamically, due to resource constraints. However, as the platform matures, it is likely that some form of customizable class loading system will emerge.

5.4 Gaming

A possible use for JCSPme is the development of games using a different approach from standard development. Unlike other Java incarnations, MIDP provides a game API, javax.microedition.lcdui.game. Game engines usually follow a basic game loop [13]: get user input, process logic, update elements and update the display. Taking a process and channel based approach might involve processes acting as game elements, each processing its own logic internally and sending messages to a centralized game engine process, which sends back any necessary status updates. Rendering to a display has already been implemented in the full scale JCSP package in the form of Active Components, particularly ActiveCanvas. This is a definite area of interest as a possible simplification of game design.
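The basic game loop from [13] can be sketched as a single method. All names here (GameLoop, runFrames) are illustrative; a JCSPme version would replace the direct calls with channel communications to element processes.

```java
public class GameLoop {
    interface Game {
        boolean running();
        void readInput();       // get user input
        void processLogic();    // process logic
        void updateElements();  // update elements
        void updateDisplay();   // update display
    }

    // run the four-step loop until the game ends; returns frames executed
    static int runFrames(Game g) {
        int frames = 0;
        while (g.running()) {
            g.readInput();
            g.processLogic();
            g.updateElements();
            g.updateDisplay();
            frames++;
        }
        return frames;
    }

    public static void main(String[] args) {
        // a trivial game that stops after three frames
        Game demo = new Game() {
            int left = 3;
            public boolean running()      { return left > 0; }
            public void readInput()       { }
            public void processLogic()    { left--; }
            public void updateElements()  { }
            public void updateDisplay()   { }
        };
        System.out.println(runFrames(demo));
    }
}
```

In the process-oriented version, each of the four steps becomes a message exchange between the engine process and the element processes, rather than a direct method call.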
5.5 Conclusions

The results gathered for CommsTime, coupled with the ever expanding mobile phone market, show that mobile devices are becoming a worthy area of research. Considering the possibilities of Ubiquitous Computing [14] and the mobility inherent in such environments, JCSP can be seen as a possible solution to some of the problems that must be overcome in the software infrastructure of Ubiquitous Computing. Indeed, the π-calculus [15] has been mentioned in the literature [16], and JCSP provides the basic functionality of mobile processes and mobile channels in a multi-platform environment, although some work remains to be done.

These performance results may be improved further by refining the behavior of various components in the framework to make them more suitable for constrained resources; testing on other mobile phone platforms is needed to determine the overall efficiency of any changes made. Other tests (Stressed Alt, for example) need to be carried out to stress test the overall capabilities of J2ME on these various platforms and really determine how well JCSPme can operate. Testing must also be carried out on various real-time hardware boards; as Java is used more and more to teach the basics of real-time software engineering, JCSPme could be used to demonstrate certain concepts of concurrency.

As mobile phones progress to becoming smart phones, providing more resources to the developer, the future of MIDP becomes unclear. Much investment has been made in MIDP as a gaming platform for mobile phones, with various companies selling games over the Internet or by other means such as text messaging.
To say MIDP will disappear is probably not completely justified, but as phones become more powerful, the likelihood is that more extensive Java platforms, such as CDC, will become more prominent and MIDP will fade into the background, or evolve and merge into the higher specification implementations. This would allow full use of the JCSP package as is, and IBM already provides JVMs for a number of mobile device platforms. This may make JCSPme unnecessary for the mobile phone market, but as Java is pushed more and more as a possible real-time platform, JCSPme may find a home in the embedded systems market.

References

[1] A. Corsaro and D. C. Schmidt, "Evaluating Real-time Java Features and Performance for Real-time Embedded Systems," presented at the Eighth IEEE Real-Time and Embedded Technology and Applications Symposium, 2002.
[2] K. Nilsen, "Adding Real-Time Capabilities to Java," Communications of the ACM, 41(6), pp. 49-56, 1998.
[3] G. Lawton, "Moving Java into Mobile Phones," IEEE Computer, 35(6), pp. 17-20, 2002.
[4] P. H. Welch and J. M. R. Martin, "A CSP Model for Java Multithreading," in P. Nixon and I. Ritchie (Eds.), Software Engineering for Parallel and Distributed Systems, pp. 114-122. IEEE Computer Society Press, June 2000.
[5] P. H. Welch, J. R. Aldous, and J. Foster, "CSP Networking for Java (JCSP.net)," in P. M. A. Sloot, C. J. Kenneth Tan, J. J. Dongarra, and A. G. Hoekstra (Eds.), Proceedings of Computational Science – ICCS 2002, Lecture Notes in Computer Science 2330, pp. 695-708. Springer Berlin / Heidelberg, 2002.
[6] G. H. Hilderink, J. F. Broenink, W. Vervoort, and A. W. P. Bakkers, "Communicating Java Threads," in A. W. P. Bakkers (Ed.), Proceedings of WoTUG-20: Parallel Programming and Java, IOS Press, Amsterdam, The Netherlands, 1997.
[7] G. H. Hilderink, J. F. Broenink, and A. W. P. Bakkers, "A Distributed Real Time Java System Based on CSP," in B. M. Cook (Ed.), Proceedings of WoTUG-22: Architectures, Languages and Techniques for Concurrent Systems, IOS Press, Amsterdam, The Netherlands, 1999.
[8] G. D. Hunt and K. I. Farkas, "New Products," IEEE Pervasive Computing, 4(2), pp. 10-13, 2005.
[9] S. J. Vaughan-Nichols, "OSs Battle in the Smart Phone Market," IEEE Computer, 36(6), pp. 10-12, 2003.
[10] J. White, "An Introduction to Java 2 Micro Edition (J2ME); Java in Small Things," in Proceedings of the 23rd International Conference on Software Engineering, pp. 724-725. IEEE Computer Society, 2001.
[11] "CDC: An Application Framework for Personal Mobile Devices," Sun Microsystems Inc., June 2003. Available at http://java.sun.com/j2me.
[12] N. C. Brown and P. H. Welch, "An Introduction to the Kent C++CSP Library," in J. F. Broenink and G. H. Hilderink (Eds.), Communicating Process Architectures 2003 (WoTUG-26). IOS Press, Amsterdam, The Netherlands, 2003.
[13] D. Clingman, S. Kendall, and S. Mesdaghi, Practical Java Game Programming, p. 25. Charles River Media, Inc., Hingham, MA, 2004.
[14] M. Weiser, "The Computer for the 21st Century," Scientific American, pp. 94-104, September 1991.
[15] R. Milner, Communicating and Mobile Systems: The π-Calculus. Cambridge University Press, 1999.
[16] T. Kindberg and A. Fox, "Systems Software for Ubiquitous Computing," IEEE Pervasive Computing, 1(1), pp. 70-81, 2002.
Communicating Process Architectures 2006
Peter Welch, Jon Kerridge, and Fred Barnes (Eds.)
IOS Press, 2006
© 2006 The authors and IOS Press. All rights reserved.
Ubiquitous Access to Site Specific Services by Mobile Devices: the Process View

Jon KERRIDGE and Kevin CHALMERS
Napier University, Edinburgh, EH10 5DT, Scotland
{j.kerridge, k.chalmers}@napier.ac.uk

Abstract. The increasing availability of tri-band mobile devices with mobile phone, wi-fi and Bluetooth capability means that increased access by mobile devices to services provided within a small locality becomes feasible. This increase in availability might, however, be tempered by users switching off their devices as they are overloaded with a multitude of messages from a variety of sources. A wide range of opportunities can be realised if we can provide a managed environment in which people can access wireless services specific to a particular physical site or location in a ubiquitous manner, independent of the service, and can also choose the services from which they are willing to receive messages. These opportunities range from retail promotions as a person walks down the street, to shopper specific offers as people enter stores that utilise reward card systems, to information about bus arrivals at a bus stop, additional curatorial information within a museum, and access to health records within a hospital environment. The CPA paradigm, with mobile processes, offers a real opportunity to provide such a capability, in contrast to the current approach that typically gives users access to web pages.
Introduction and Motivation

The JCSP framework [1, 2], together with the jcsp.mobile package [3], provides the underlying capability for constructing a ubiquitous access environment to services provided at the wi-fi and Bluetooth ranges of 100m and 10m respectively. Service providers will make available a universal access channel that permits the initial interaction between a person's mobile device and the service provider's server at a specific site or physical location. The mobile device will contain only a single simple access process that downloads client processes from any service provider offering this type of service. These client processes will then interact with the service provider's server to achieve some desired outcome of benefit to both the user and the service provider. Regardless of the service provider, the same simple access process residing in the mobile device remains unaltered. The access method will enable a mobile device to manage interactions with several service providers at the same time. Once an interaction has finished, the resources used within the mobile device will be automatically recovered for reuse. Location and context determination poses no problem, because location is determined solely by the user's mobile device detecting and receiving a signal from a service provider.

In order that a person can personalize the services with which they are willing to interact, the mobile device should also contain a preferences document describing the types of interaction in which the user is willing to participate. The user can modify this document, and can thus ensure that only services in which they are interested will cause an interaction between the mobile device and a service provider's server. The mechanism is thus one into which a user opts in, rather than having to opt out of either specific services or services specified in some generic manner.
J. Kerridge and K. Chalmers / Ubiquitous Access to Site Specific Services by Mobile Devices
The benefits of this style of interaction are that service providers will be able to create services that are specific to their organisation. It thus becomes a personal service, unlike current technology, which tends to broadcast relatively unspecific data to a large number of people. The proposed method means users can access many different services with only one universal and ubiquitous access mechanism, which could be installed by the mobile device manufacturer.

The mechanism is analogous to that employed in coastal VHF radio communication between ships and coast guards. Mariners call the coast guard using a previously defined listening channel. The coast guard then tells the mariner to switch to a specific channel on which they can continue their conversation privately. In this proposal, the means by which communication between a mobile device and any server is started is identical, universal and independent of the communications technology. The server then transfers to the mobile device a specific client that makes that interaction unique.

The UASSS (Ubiquitous Access to Site Specific Services) concept is applicable to many and varied applications, of which a non-exhaustive list includes:

- Supermarkets could make special offers to customers as they walk into the store, based upon the shopper's previous spending habits and current stock availability within the store. Shoppers could be informed both of price reductions specific to them and of new lines in which they might be interested.

- A person walking down the street could be informed that the latest edition of a magazine they regularly buy is now available at the newsagent they are passing.

- A museum or art gallery visitor could be given information about the objects, at various levels of detail, in their chosen language, as they move round the displays.

- Within the home, it could be used to replace all remote controls by a single device that downloads each product's controller as required. More importantly, devices could download new controllers from the Internet automatically, and these would be made available without any user intervention. Intelligent devices also become feasible, such as digital camera systems that detect that new photographs have been taken and automatically download them onto the home PC as the person walks into their home.

- Within hospitals, the UASSS could be used to access electronic patient records and other information, depending upon the location and role of the person accessing the hospital's systems.

- On entering a bus stop, a person could be informed of the expected arrival of the next bus, but only for buses that stop at that stop and that the person has indicated in their preferences document they normally catch.
1. Background

Much of the current work in context and location aware computing addresses a different set of problems: namely, how consistent services can be provided, securely, to a user as they move around the environment, and how precise location information can be obtained. Many researchers are looking at ways of confirming the specific location of a person using a number of different technologies [4, 5], for example GPS [6] and radio tags [7]; knowing the precise location, route planning guidance can then be given [8], assuming the system knows where the person wants to go. The UASSS does not require this level of sophistication: if an access point providing the ubiquitous access capability can be detected, then the location of the person is known sufficiently well to undertake an interaction.
Workers [9, 10] have recognised the importance of a software engineering approach to building such systems, and the difficulty of achieving this goal. UASSS uses a well proven software engineering approach based upon Hoare's CSP [11], which provides a compositional methodology for building highly parallel systems, of which this application provides a prime example. Others are working on access control, especially as a person moves around [12, 13], and the effect this has on power consumption. Some have addressed the problem of advertising [14], or of those wishing to work in groups accessing a common repository [15]; again, these are not areas the UASSS needs to address. Several groups have suggested sets of scenarios that motivate the research they have undertaken [16, 17], none of which includes any of the scenarios suggested earlier. It has been suggested in [18] that "It is a big design challenge to design personalised location awareness so that it does not require too much effort on the part of the users."

1.1 Ubiquitous Computing

"UC [Ubiquitous Computing] is fundamentally characterized by the connection of things in the world with computation" [19]. With this statement, Weiser describes the underlying fundamental principle of Ubiquitous Computing: attaching computational elements to physical objects. Research is generally focused on thinking about computers in the world, taking account of the environment and trying to remove them from our perception [20]. Another viewpoint is the interaction between the physical and computing worlds [21]. Computers are everywhere, but this does not truly give us computing ubiquity, as a toaster with a microchip is still just a toaster. Only when the toaster can connect to other devices does ubiquity occur [19].
For example, this could enable the toaster to talk to the alarm clock, having toast ready for us when we get up in the morning, overcoming a small problem (if it can be called such) instead of addressing the large ones that computing usually aspires to. Some other descriptions involve the idea of smart spaces [22], which incorporate devices that form a smart dust of components. However, this diverges somewhat from the idea of a device performing a task for us. Want provides a good example of how to view this [23]: when purchasing a drill, do you want a drill, or do you want to make a hole? In general, the answer will most likely be the latter, and a good tool should be removed from our awareness [24], something the computer rarely is. This is where Weiser started to develop the notion of Ubiquitous Computing, examining new ways for people to relate to computers, and trying to enable computers to take the lead in this relationship.

As a further level of complexity in how to think about computer ubiquity, the European Information Society Technology research programme seems also to be addressing the same areas as Ubiquitous and Pervasive Computing [25], but without using these terms explicitly. Another viewpoint is that of everyday computing [26], which considers the scaling of Ubiquitous Computing. What this actually appears to mean is adding the concept of time, removing the beginning and end of a system interaction by the user, and allowing task interruption. It is arguable whether this is actually different from UbiComp; time is an important piece of contextual information, and how to scale an idea that is meant to be incorporated into everything is another question. These ideas of task and tool are generally highly coupled: as technology evolves, so do tasks, and vice versa [27].
As this occurs, Weiser puts forth that we should not be adapting to technology; rather, technology should fit to our lives, something Weiser terms Calm Technology [19]. Calmness can also be considered similar to the invisibility of computers [28].
To put Ubiquitous Computing into a better context, Kindberg [21] provides a set of examples of interactions that are either not ubiquitous or are borderline ubiquitous. Accessing email with a laptop, or a collection of wirelessly connected laptops, are (very obviously) not examples of UbiComp; a smart coffee cup, peer-to-peer games and the Internet are borderline. Weiser agrees with the analogy of the Internet as a ubiquitous entity, although his belief is that the focus should be moved from thin clients to thin servers [19], both for the Internet and other devices. In fact, Weiser considers the Internet and embedded processors as signalling the beginning of the Ubiquitous Computing era (with 8 billion embedded processor sales compared to only 15 million PCs in 2000 [23], this era can surely not be too far away).

Lately, research does seem to concentrate more on the idea of mobility. Weiser states that this is not all that UbiComp means, but the current literature indicates a shift towards this idea. An example is Want's argument [23] that Personal Digital Assistants (PDAs) and mobile phones are the most useful devices for Ubiquitous Computing developers, although limited in the required computing capability, integration and interface requirements. Another example is Abowd [26], who claims inch-scale computing is here in the form of the PDA. Weiser describes entities smaller than this, however [20], and also states that Ubiquitous Computing addresses different ground from the PDA [24], which is merely the acquisition of a tool without thinking of its purpose. As an example, Weiser describes the lifting of a heavy box: either you call an assistant to help, or you automatically, and unconsciously, become stronger. Computing generally focuses on the former, whereas UbiComp aims for the latter. The emphasis on mobile computing has enabled some other interesting trends.
Some major issues in developing UbiComp systems were the power and processing constraints of the sometimes small and mobile devices required. Weiser's work in fact forced new metrics to be established [19], such as MIPS (Million Instructions Per Second)/Watt and Bits/Sec/m³. Thanks to improvements more likely attributable to mobile computing as a whole than to UbiComp alone, Weiser saw a hundred-fold improvement in MIPS/Watt in the three years following a decade of stagnation.

1.2 Site Specific Services

What is apparent from the literature is that Ubiquitous Computing relies on services: devices or people executing some external task through an intermediary device. This is most apparent in the area of context-awareness, and location-awareness in particular (Location Aware Services - LAS, Location Based Services - LBS), the goal being to provide access to resources via services [29]. The hope is to create an environment in which entities interact to provide and use services between one another, thereby utilising local resources. The main problem here is the terminology: LAS and LBS seem to be interchangeable, and both are extremely dependent on location data. A better solution is to provide services based in a location, or site specific services [30]. This is basically a stationary LBS as described by Ibach [29], but it frees us from the ambiguous term Location Based Services. A site specific service is much easier for users in the real world to relate to. Site Specific Services (SSS) already exist in the form of ticket machines, information kiosks, interactive product catalogues, and ATMs. Adding UbiComp to the equation allows multiple users to exploit these services at once, without the normal waiting to access the machine. There are issues when providing this type of interaction. First of all, different users using the same device should discover different services from one another [31], or more
precisely, a user's role should determine the services available to them. Secondly, services need either to support the plethora of client hardware platforms (a futile task), or to be developed using a platform-independent framework – such as Sun Microsystems' Java or, to a lesser extent, Microsoft's .NET Framework. The third problem is establishing connections between devices and localized services [30]. Users will need to be aware when connection is possible, and connection will need to occur as quickly and efficiently as possible. A final point to consider in this non-exhaustive list is the storage of personalized information on some form of mobile device, to allow interactions to be tailored as far as possible. When we view the environment as a collection of distributed sites, with some sites within sites, we do not necessarily want our personalized information travelling electronically from site to site; a more physical form is probably preferable. This removes many of the privacy concerns apparent in the general field of UbiComp, but the single device is still a point of weakness, and security will have to be built around it. Another possible solution is to distribute information between device and servers [32], reducing the amount of personal information carried by the user, but requiring a higher level of trust in the service provider.

The consensus seems to be that providing site specific services is a good form of Ubiquitous Computing, although little application of this is apparent. One such system [14] involved messages sent to phones as users passed stores; although technologically restricted (to Bluetooth-enabled mobile phones), these ideas appear to have been implemented on a commercial scale [33] by Filter UK, who provide multimedia content at locations using Bluetooth and Java. Another system [30] provides a service explorer interface, allowing users to interact with site specific services via a mobile phone.
This solution does appear to be a glorified web browser that provides a UI to a nearby terminal on a phone – not a bad idea in itself, but one that does not really allow the level of interaction required for Ubiquitous Computing. Ubiquitous Computing is considered one of the Grand Challenges in Computing Research [35], an area to which Milner appears interested in applying techniques from the π-calculus [36].

2. The Underpinning Architectural Process Structures

The two key underlying capability requirements of the UASSS are:

- the mobile device contains a process, called the access process, which can identify wireless access points using a variety of technologies, and
- the service provider makes available an access channel that is universally known by the same name.
The first requirement implies that any mobile device not only contains the access process but also the infrastructure to run processes in a JCSP framework. This capability has been achieved and is reported elsewhere [3]. These capabilities could be added to a mobile device after initial purchase or could be incorporated into such devices when they are manufactured. The second requirement means that the wider community, involving service providers, network providers and mobile device manufacturers, has to agree on a single name by which access to this type of site specific service is initiated. For the purposes of this paper it shall be called "A". Once these requirements have been realised, the following simple set of interactions allows a mobile device to connect to any site specific service.
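The bootstrap interaction implied by these two requirements can be sketched outside JCSP. In this illustrative plain-Java analogue (all class and variable names are our own, and a BlockingQueue stands in for the well-known network channel "A"), the mobile device publishes the location of an anonymous reply channel on "A" and the server answers on that channel:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the UASSS bootstrap handshake: the mobile device writes the
// location of its own anonymous input channel on the well-known channel
// "A"; the server replies on that channel with an initial mobile process.
// BlockingQueue stands in for a JCSP network channel; names are illustrative.
public class AccessHandshake {

    public static String handshake() {
        try {
            BlockingQueue<BlockingQueue<String>> channelA = new ArrayBlockingQueue<>(1);

            // Server side: read a reply-channel location from "A", send the IMP.
            Thread server = new Thread(() -> {
                try {
                    BlockingQueue<String> replyChannel = channelA.take();
                    replyChannel.put("InitialMobileProcess"); // the IMP, in effigy
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            server.start();

            // Mobile device side: create an anonymous input channel, publish
            // its "location" on "A", then wait for the process to arrive.
            BlockingQueue<String> processReceive = new ArrayBlockingQueue<>(1);
            channelA.put(processReceive);
            String imp = processReceive.take();
            server.join();
            return imp;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(handshake());   // prints "InitialMobileProcess"
    }
}
```

The key point is that "A" carries only one message per client: the location of the client's private reply channel; all further traffic is private.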
2.1 Detection of the Site Specific Service by the Mobile Device

The access process (AP) in the mobile device detects the presence of either a wi-fi or Bluetooth wireless access point (WAP) that is transmitting data in the form of an IP address of a node on which the address of a JCSP.net Channel Name Service (CNS) [2] is located. Most WAPs, for security reasons, do not transmit such data and thus by default do not participate in the UASSS capability. Once AP has received the address of a CNS, it initializes itself as a node in the service provider's network, thereby becoming an ad-hoc member of that network. AP then creates an anonymous network channel input end, the location of which is sent to the server using the access channel "A"; this is the only communication between the mobile device and the site specific server (SSS) that uses "A".

2.2 Communication of an Initial Mobile Process from Server to Mobile Device

The SSS receives the location of the mobile device's network channel input end and uses it to create a network output channel. Over this channel it now communicates the initial mobile process (IMP) that is to be executed within the mobile device. On reading the IMP, the mobile device causes it to be executed. This IMP could be the only process that is transferred, but in other cases the IMP might, by means of a user interaction, determine the user's specific requirement and cause the transfer of yet further processes. These transfers will take place over channels that are private and hence known only to the SSS and the IMP. The IMP achieves the ubiquitous nature of the interaction by ensuring a user only becomes involved in an interaction with a service provider identified in a preferences document. The IMP will thus interrogate the preferences document and discard any communication from a service provider that does not appear therein. The mobile device, by means of the IMP, is now able to communicate with the SSS.
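Mechanically, "communicating a mobile process" means serializing the process object at the server, shipping the bytes over the network channel, and deserializing and executing the copy on the device. The following plain-Java sketch (our own illustrative classes; the jcsp.mobile machinery that also ships missing class files is not shown) demonstrates the round trip:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch of transferring a process object, as a network channel would:
// serialize at the sender, deserialize at the receiver, then execute.
public class MobileProcessSketch {

    // A trivial stand-in for an initial mobile process.
    static class Greeter implements Serializable, Runnable {
        final String site;
        Greeter(String site) { this.site = site; }
        public void run() { /* site-specific behaviour would start here */ }
    }

    // Serialize then deserialize, imitating transmission over a channel.
    static Object shipAndReceive(Serializable process) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(process);
            out.flush();
            ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
            return in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Greeter received = (Greeter) shipAndReceive(new Greeter("station"));
        received.run();                    // execute the transferred process
        System.out.println(received.site); // its state travelled with it
    }
}
```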
If further channels are required for this interaction, these can be passed as properties of the mobile process, and dynamic anonymous channel connections can be created in the same manner as the channel used to send the initial process. The way this capability can be utilized is now explained by a simple application.

3. The Meeting System

Consider a train station or airport in which people congregate waiting for the departure of trains and flights that might be delayed. If you are travelling within a group, it would be sensible to meet together during the delay. The management of the transport infrastructure has set up a means whereby such ad hoc meetings can be registered and subsequently accessed by other members of the group, provided they have previously agreed on a name by which they will recognize the group. One person registers the meeting name and a location with the service. Subsequent members of the group may try to create the same meeting, in which case they are told that it has already been created and where it is located. Others, typically those that arrive close to the original departure time, will just try to find the meeting, possibly knowing that they are unlikely to be the first to arrive. Anyone trying to find a meeting that has not yet been created will be told that it has not yet been registered.

3.1 The Access Process (AP)

Listing 1 gives the Java code for the Access Process, which is generic and applicable to all sites offering the UASSS capability. This representation assumes access to only a single
SSS and that we know the IP address of the machine running the CNS. The AP, as well as the remainder of the system, utilizes the jcsp.mobile package [3]. The AP is specified in Java because this process has to run on a mobile device, which will have access only to the (limited) subset of the full Java environment made available on mobile devices.

01 public class AccessProcess {
02
03   private static ProcessChannelInput processReceive;
04
05   public static void main(String[] args) {
06     String CNS_IP = Ask.string("Enter IP address of CNS: ");
07     Mobile.init(Node.getInstance().init(new TCPIPNodeFactory(CNS_IP)));
08     String processService = "A";
09     NetChannelLocation serverLoc = CNS.resolve(processService);
10     NetChannelOutput toServer = NetChannelEnd.createOne2Net(serverLoc);
11     processReceive = Mobile.createNet2One();
12     toServer.write(processReceive.getChannelLocation());
13     MobileProcess theProcess = (MobileProcess)processReceive.read();
14     new ProcessManager(theProcess).run();
15   }
16 }
Listing 1. The Access Process
The channel processReceive {line 3} is the network channel upon which the IMP will be received. The IP address of the CNS is determined by means of a console-based user interaction {line 6}, but in practice would be determined automatically by the device. The mobile device is then initialized as a member of the same network as the CNS {7}. The name of the access channel "A" is defined in processService {8}. The location of the input end of the access channel is resolved because this process writes to a process in the SSS {9}, and then the output end is created {10}. The processReceive channel is then created {11} and its channel location written to the process that inputs messages on the "A" channel {12}. The AP now reads the IMP as theProcess {13}, and this is then run using a ProcessManager {14}, a JCSP class that permits a process to be spawned concurrently with the AP process. At this point the IMP will be executed, and the processing becomes specific to the site from which it has been downloaded.

3.2 The Architecture of the Site Specific Service Server

Figure 1 shows the outline of the architecture required to support an SSS, here specialised to the Meeting Organiser application. Some of the processes may be executed on different processors within the network, but that is of no concern for this explanation. The input end of the access channel "A" is connected to the IMPServer, which is responsible for obtaining and then communicating an instance of an IMP process to a mobile device. The IMP instance is obtained from an IMPSender process. The named network channels "N" and "F" are used by the Meeting Organiser to manage the communication of instances of the NewMeeting and FindMeeting client processes. Each client service known to the SSS has its own pair of Server and Sender processes. The Sender manages a list of available client processes, such that when a service client has completed it can be reused.
In this case the Meeting Database is relatively simple and just receives requests on its requestChannels. The required response channel is allocated dynamically because the input end of the channel is created within a mobile device. Once an interaction has completed, the MeetingDatabase process can inform the appropriate Sender process which service client is available for reuse, using one of the Reuse channels.
[Figure 1 shows the Meeting Organiser server: the Meeting Database connects via its requestChannels and reuse channels to a Server/Sender pair for New Meeting clients (internal channels newServe2Send and newSend2Serve, network channel "N") and a Server/Sender pair for Find Meeting clients (internal channel findSend2Serve, network channel "F"); an IMPServer/IMPSender pair (internal channel IMPConnection) serves the access channel "A"; a CNSServer supports the whole Meeting Organiser.]

Figure 1. Architecture of the Meeting Organiser Server
3.3 The Initial Mobile Process (IMP)

Each mobile process within the UASSS typically comprises two processes: one provides the functional capability, the other the user interface. The IMP (Listing 2) is no exception {17-26}. The events channel {19} provides the interface between the user interface and the capability process. In this case no configuration channel is required, as the interface is used merely to pass input events from the interface to the capability process. The code has been created using the Groovy Parallel formulation [34]. The use of Groovy has no impact on the processing, as the jcsp.mobile package automatically downloads from the server any classes not present on the mobile device, including any specifically needed by the Groovy system.

17 class InitialMobileProcess extends MobileProcess {
18   void run () {
19     def events = Channel.createAny2One()
20     def processList = [
21       new InitialClientCapability ( eventChannel : events.in() ) ,
22       new InitialClientUserInterface ( buttonEvent : events.out() )
23     ]
24     new PAR (processList).run()
25   }
26 }
Listing 2. The InitialMobileProcess Structure
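The capability/user-interface pairing used throughout the system can be sketched in plain Java (our own illustrative names; a SynchronousQueue approximates an unbuffered JCSP channel): a UI process pushes a button event down the events channel and a capability process reacts to it.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.SynchronousQueue;

// Sketch of the two-process shape used by each mobile process: a user
// interface writing events to a channel, and a capability process reading
// them. Here the capability maps the button press to a service name,
// as the InitialClientCapability does.
public class CapabilityUiPair {

    public static String runPair(String buttonPressed) {
        BlockingQueue<String> events = new SynchronousQueue<>();
        final String[] chosenService = new String[1];

        Thread capability = new Thread(() -> {
            try {
                String eventType = events.take();          // wait for a UI event
                chosenService[0] =
                    eventType.equals("Create New Meeting") ? "N" : "F";
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread ui = new Thread(() -> {
            try { events.put(buttonPressed); }             // the button event
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        capability.start(); ui.start();
        try { capability.join(); ui.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return chosenService[0];
    }

    public static void main(String[] args) {
        System.out.println(runPair("Create New Meeting"));  // prints "N"
    }
}
```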
3.3.1 Initial Client Capability Process The InitialClientCapability process (Listing 3) waits to read an eventType {31} from the user interface process and then depending upon whether the user has pressed the “Create New Meeting” or “Find Existing Meeting” button sets the serviceName to “N” or “F” as required {34, 37}. These service names are private to the SSS and are connected to respective Servers as shown in Figure 1. The serviceName is resolved with the CNS {39} so that we can create a network output channel from this process to the required server {40}. An anonymous mobile process input channel processReceive is then created {41} and its location written to the server {42}. This message indicates that a mobile device requires an instance of the specific process and also informs the server which network
address should be used to communicate the mobile client process. The process is then read {43} and executed {44}. When the loaded process terminates, all the resources used by the mobile device are automatically recovered.

27 class InitialClientCapability implements CSProcess {
28   @Property ChannelInput eventChannel
29
30   void run () {
31     def eventType = eventChannel.read()
32     def serviceName = null
33     if ( eventType == "Create New Meeting" ) {
34       serviceName = "N"
35     }
36     else {
37       serviceName = "F"
38     }
39     def serverLoc = CNS.resolve(serviceName)
40     def toServer = NetChannelEnd.createOne2Net(serverLoc)
41     def processReceive = Mobile.createNet2One()
42     toServer.write(processReceive.getChannelLocation())
43     def theProcess = (MobileProcess)processReceive.read()
44     new ProcessManager(theProcess).run();
45   }
46 }
Listing 3. The Initial Client Capability Process
3.3.2 The Initial Client User Interface

The InitialClientUserInterface is a simple interface with two buttons, "Create New Meeting" and "Find Existing Meeting". The interface uses the active AWT components that are part of JCSP.

3.4 The New Meeting Client

For the purposes of explanation we shall describe the New Meeting Client only.

3.4.1 The Meeting Data Object

MeetingData (Listing 4) has been created to send data from the mobile device and also to return the results of the request from the MeetingOrganiser to the user client process. It has to implement the Serializable interface because it is communicated over the network. In general, care must be taken when accessing objects communicated between processes, to ensure no aliasing problems. In this case the problem is alleviated because all communication is over a network, so the underlying system makes a deep copy of any object. The property returnChannel {48} holds the net location of an input channel that is used to return results to the user process in the mobile device. The clientId {49} is the identity of the client process used by this user process. The properties meetingName {50} and meetingPlace {51} are text strings input by the user of the mobile device to name the meeting and the place they are to meet. The property attendees {52} indicates the number of people that have already joined the group of people that form the meeting.

47 class MeetingData implements Serializable {
48   @Property returnChannel
49   @Property clientId
50   @Property meetingName
51   @Property meetingPlace
52   @Property attendees
53 }
Listing 4. The MeetingData Object
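The deep-copy behaviour that removes the aliasing concern can be demonstrated with plain Java serialization (an illustrative Data class of our own; the sketch assumes serialization semantics, which is what network transmission of a Serializable object amounts to): mutating the sender's object after "transmission" does not affect the receiver's copy.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Network communication of a Serializable object makes a deep copy, so
// sender and receiver cannot alias shared state.
public class DeepCopyDemo {

    static class Data implements Serializable {
        List<String> attendees = new ArrayList<>();
    }

    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepCopy(T obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(obj);
            out.flush();
            return (T) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Data original = new Data();
        original.attendees.add("alice");
        Data copy = deepCopy(original);          // "sent over the network"
        original.attendees.add("bob");           // mutate the sender's object
        System.out.println(copy.attendees.size()); // prints 1: no aliasing
    }
}
```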
3.4.2 New Meeting Client Process

The process follows the pattern used before, in that it comprises capability and user interface processes, as shown in Listing 5, and extends MobileProcess [3]. The property clientServerLocation {55} is the location of a network input channel that forms one of the requestChannels shown in Figure 1. The property clientId {56} indicates which of the available NewMeetingClientProcesses has been allocated to this user's mobile device. It will subsequently be used to indicate that the process can be reused once the interaction has completed. To connect NewMeetingClientCapability to NewMeetingClientUserInterface, a set of event and configuration channels is defined {59}. The clientProcessList is then defined {60-69}, comprising the two processes. The processes are passed property values that connect the capability process to the user interface and vice versa. The process is then executed by means of a PAR {70}.

54 class NewMeetingClientProcess extends MobileProcess {
55   @Property NetChannelLocation clientServerLocation
56   @Property int clientId
57
58   void run () {
59     // define event and configuration channels between processes
60     def clientProcessList = [
61       new NewMeetingClientCapability (
62         clientId : clientId,
63         clientServerLocation : clientServerLocation,
64         // and interface channel connections
65       ) ,
66       new NewMeetingClientUserInterface (
67         // interface channel connections
68       )
69     ]
70     new PAR ( clientProcessList ).run()
71   }
72 }
Listing 5. The New Meeting Client Process
3.4.3 The New Meeting Client Capability

Listing 6 shows the relatively simple process associated with creating a new meeting. The property clientServerLocation {74} is the connection to the requestChannel shown in Figure 1, and clientId {75} is the number of the client process being used. The event and configuration channels used to connect the capability and user interface processes are also defined. A network output channel connecting this capability process to the meeting organiser service is then created using the clientServerLocation property {79}. The network channel used to return the results from the server to this client is then defined {80}. Lines {81-86} define an instance of MeetingData, which is then populated with the necessary data values. The property returnChannel {82} contains the location of the input network channel, and the meetingName {84} and meetingPlace {85} properties are obtained directly from the user interface. The clientData is then written to the meeting organiser server {86}. The mobile device then reads the replyData {87}. The value of attendees indicates whether this meeting has just been created or had already been created {88-95}. In either case the user is told where the meeting is taking place. The number of attendees already at the place is also written to the user interface.
73 class NewMeetingClientCapability implements CSProcess {
74   @Property NetChannelLocation clientServerLocation
75   @Property int clientId
76   // event and configuration channels that connect UI to NMCC
77
78   void run () {
79     def client2Server = Mobile.createOne2Net(clientServerLocation)
80     def server2Client = Mobile.createNet2One()
81     def clientData = new MeetingData()
82     clientData.returnChannel = server2Client.getChannelLocation()
83     clientData.clientId = clientId
84     clientData.meetingName = meetingNameEvent.read()
85     clientData.meetingPlace = meetingLocationEvent.read()
86     client2Server.write(clientData)
87     def replyData = (MeetingData) server2Client.read()
88     if ( replyData.attendees == 1 ) {
89       registeredConfigure.write("Registered")
90     }
91     else {
92       registeredConfigure.write("ALREADY Registered")
93     }
94     registeredLocationConfigure.write(replyData.meetingPlace)
95     attendeesConfigure.write(new String (" " + replyData.attendees) )
96   }
97 }
Listing 6. New Meeting Client Capability
3.4.4 New Meeting Client User Interface

The user interface process is simply a collection of interface components connected to the capability process by a set of event and configuration channels. The interface is created as an ActiveClosingFrame. The input container comprises two labels and two ActiveTextEnterFields that are used to enter the name of the meeting and the place where the people are to congregate. The response container gives the user feedback as to whether the meeting was created, or whether somebody had already registered the meeting and its location. The active components that make up the interface are added to a process list and then invoked using a PAR.

3.5 The Processes Contained Within the Meeting Organiser Site Specific Server

Figure 1 shows the processes contained within the SSS, which shall now be described. The server and sender processes are generic, and thus independent of the particular client process that is to be communicated.

3.5.1 The Server Process

The Server process (Listing 7) has properties comprising its channel connections to its related sender process {100, 101} and the name of the service it provides. The serviceName {102} is the name of the private network channel that an IMP will use to obtain a mobile client for a specific service. The server is defined {105} as an instance of MultiMobileProcessServer, defined in jcsp.mobile [3]. The server is then initialised with the channel connections to its sender process and the serviceName {106}, and invoked {107}. Such a server waits for a request on a network channel with the same name as serviceName. The request takes the form of a network channel location to which it can write a mobile client process. The server process then signals its need for an instance of the mobile client process on its toSender channel. It
then reads the mobile client process from its fromSender channel and communicates it to the mobile device using the network channel identified in the original request.

 98 public class Server implements CSProcess {
 99
100   @Property ChannelInput fromSender
101   @Property ChannelOutput toSender
102   @Property String serviceName
103
104   void run() {
105     def theServer = new MultiMobileProcessServer()
106     theServer.init(serviceName, fromSender, toSender)
107     new PAR ([theServer]).run()
108   }
109 }
Listing 7. The Generic Server process
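The cycle that such a server performs can be sketched in plain Java (our own illustrative names; queues stand in for the JCSP channels): receive a request carrying a reply channel, signal the Sender, read a client-process instance, and forward it to the requester.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of one iteration of the generic Server's cycle, with the mobile
// side's request and receive folded in for a self-contained demonstration.
public class ServerLoopSketch {

    public static String serveOne(BlockingQueue<BlockingQueue<String>> serviceChannel,
                                  BlockingQueue<String> toSender,
                                  BlockingQueue<String> fromSender) {
        try {
            BlockingQueue<String> reply = new ArrayBlockingQueue<>(1);
            serviceChannel.put(reply);     // mobile side: request a client process

            // Server side, one iteration of its loop:
            BlockingQueue<String> request = serviceChannel.take();
            toSender.put("need-client");   // signal the Sender
            String clientProcess = fromSender.take();
            request.put(clientProcess);    // deliver to the mobile device

            return reply.take();           // mobile side receives it
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        BlockingQueue<BlockingQueue<String>> n = new ArrayBlockingQueue<>(1);
        BlockingQueue<String> toSender = new ArrayBlockingQueue<>(1);
        BlockingQueue<String> fromSender = new ArrayBlockingQueue<>(1);
        fromSender.offer("NewMeetingClientProcess");   // Sender pre-loads one
        System.out.println(serveOne(n, toSender, fromSender));
    }
}
```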
3.5.2 The Sender Process

The nature of the Sender process (Listing 8) will vary depending upon the requirements of the service being provided. The properties identify the channels that connect the Sender to its related Server process {112, 113} and the channel upon which it receives inputs informing it that a client can be reused {114}. It also receives a List of clients {115} that have been allocated to this server. Typically these client processes will all be instances of the same mobile client process, with the required network input channel locations embedded in each instance. Network channels used to output results from the service to the mobile device have to be created dynamically; see {80, 82, 86}.

110 public class Sender implements CSProcess {
111
112   @Property ChannelOutput toServer
113   @Property ChannelInput fromServer
114   @Property ChannelInput reuse
115   @Property List clients
116   void run() {
117     def serviceUnavailable = new NoServiceClientProcess()
118     def n = clients.size()
119     def clientsAvailable = []
120     for (i in 0 ..< n) {
121       clientsAvailable.add(clients[i])
122     }
123     def alt = new ALT ([reuse, fromServer])
124     def index, use, client
125     while (true) {
126       index = alt.select()
127       if (index == 0 ) {
128         use = reuse.read()
129         clientsAvailable.add(clients[use])
130       }
131       else {
132         fromServer.read()
133         if (clientsAvailable.size() > 0 ) {
134           client = clientsAvailable.pop()
135           toServer.write(client)
136         }
137         else {
138           toServer.write(serviceUnavailable)
139         }
140       }
141     }
142   }
143 }
Listing 8. The Generic Sender Process
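The Sender's bookkeeping (a pool of client instances, a fallback when the pool is empty, and reuse notifications that return instances to the pool) can be sketched without the channel machinery. All names here are our own:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Pool of client-process instances with a "service unavailable" fallback,
// mirroring the Sender's use of clientsAvailable and NoServiceClientProcess.
public class ClientPool {
    static final String UNAVAILABLE = "NoServiceClientProcess";

    private final List<String> clients;                    // instances, by clientId
    private final Deque<Integer> available = new ArrayDeque<>();

    ClientPool(List<String> clients) {
        this.clients = clients;
        for (int i = 0; i < clients.size(); i++) available.push(i);
    }

    // A request from a mobile device (the fromServer branch of the ALT).
    String allocate() {
        return available.isEmpty() ? UNAVAILABLE : clients.get(available.pop());
    }

    // A reuse notification from the database (the reuse branch of the ALT).
    void reuse(int clientId) { available.push(clientId); }

    public static void main(String[] args) {
        ClientPool pool = new ClientPool(List.of("client0", "client1"));
        System.out.println(pool.allocate());   // one pooled client
        System.out.println(pool.allocate());   // the other
        System.out.println(pool.allocate());   // pool empty: fallback process
        pool.reuse(0);                         // client 0 has finished
        System.out.println(pool.allocate());   // client0 is available again
    }
}
```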
In this case the Sender implements a simple quality-of-service capability by sending a NoServiceClientProcess {117, 138} if none of the actual client process instances can be used. Each of the client processes is added to a list of available clients {119-122}. The Sender alternates over the reuse and fromServer channels {123} in a never-ending loop {125}: an input on the reuse channel causes a client process to be added back to the clientsAvailable list {127-130}; otherwise a request originating from an IMP is read, and either a client or the service-unavailable process is sent {132-139}.

3.5.3 The Meeting Process

144 class Meeting implements CSProcess {
145   @Property List requestChannels
146   @Property ChannelOutput nReuse
147   @Property int newClients
148   @Property ChannelOutput fReuse
149   @Property int findClients
150
151   void run() {
152     def meetingMap = [ : ]
153     def alt = new ALT (requestChannels)
154     def newMeeting = new MeetingData()
155     def findMeeting = new MeetingData()
156     def replyData = new MeetingData()
157     def index
158     while (true) {
159       index = alt.select()
160       switch (index) {
161         case 0 ..< newClients :
162           newMeeting = requestChannels[index].read()
163           def reply = Mobile.createOne2Net(newMeeting.returnChannel)
164           if ( meetingMap.containsKey(newMeeting.meetingName ) ) {
165             replyData = meetingMap.get(newMeeting.meetingName)
166             replyData.attendees = replyData.attendees + 1
167           }
168           else {
169             replyData = newMeeting
170             replyData.attendees = 1
171           }
172           meetingMap.put ( replyData.meetingName, replyData)
173           reply.write(replyData)
174           nReuse.write(replyData.clientId)
175           break
176         // case to deal with find meeting requests
177       } // end of switch
178     } // end of while
179   } // end of run()
180 }
Listing 9. The Meeting Database Process
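The create-or-join rule at the heart of this listing can be sketched in plain Java (an illustrative class of our own): the first registration of a name fixes the meeting place and starts the attendee count at one; later registrations of the same name ignore the proposed place and simply increment the count.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the meeting database's create-or-join bookkeeping.
public class MeetingRegistry {
    static class Entry {
        final String place;
        int attendees;
        Entry(String place) { this.place = place; this.attendees = 1; }
    }

    private final Map<String, Entry> meetingMap = new HashMap<>();

    // Returns the authoritative entry, i.e. the reply sent to the device.
    Entry register(String name, String proposedPlace) {
        Entry e = meetingMap.get(name);
        if (e == null) {
            e = new Entry(proposedPlace);   // first to register fixes the place
            meetingMap.put(name, e);
        } else {
            e.attendees++;                  // already registered: join it
        }
        return e;
    }

    public static void main(String[] args) {
        MeetingRegistry db = new MeetingRegistry();
        Entry first = db.register("delayed-flight", "Gate 12");
        Entry second = db.register("delayed-flight", "Cafe");
        System.out.println(first.place + " " + second.attendees);  // Gate 12 2
    }
}
```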
The Meeting process (Listing 9) implements the meeting database shown in Figure 1. Its properties {145-149} are directly related to that diagram and comprise the requestChannels and the two channels by which client processes that can be reused are identified. The numbers of new meeting and find meeting client processes are also required. A map, meetingMap {152}, is used to hold the name of the meeting as its key, with the associated MeetingData object as the map entry. It is presumed that the requestChannels list has been ordered such that all new meeting client requests are in the first part of the list, followed by the find meeting requests. Hence the Meeting process has to alternate over the two parts of the list as shown {161, 176}. In the case of a new meeting request {162-175}, the request is read into newMeeting {162}. A network reply channel is created from the returnChannel attribute of the
MeetingData object {163}. If the meetingMap already contains an entry for this meetingName {164}, then the number of attendees in the replyData object is incremented {165-166}; otherwise it is set to 1 {170}. The entry in the meetingMap is then refreshed
{172}. The reply is written back {173} to the mobile device that is currently executing the mobile process NewMeetingClientProcess, which is able to interpret the reply appropriately {88}. The clientId of the client undertaking this interaction can then be written on the nReuse channel {174} so that this client can be reused. In this simple case the interaction between mobile device and server requires only a single request and reply, and thus the mobile client process can be reused immediately. The implementation of the find meeting processing is similar.

3.5.4 The Meeting Organiser

The Meeting Organiser (Listing 10) is a Groovy script that instantiates the server components given in Figure 1.

def CNS_IP = Ask.string("Enter IP address of CNS: ")
Mobile.init(Node.getInstance().init(new TCPIPNodeFactory(CNS_IP)))
def nSize = Ask.Int("Number of Concurrent New Meeting Clients? ", 1, 2)
def fSize = Ask.Int("Number of Concurrent Find Meeting Clients? ", 1, 3)
def newRequestLocations = []
def netChannels = []
for (i in 0 ..< nSize) {
  def c = Mobile.createNet2One()
  netChannels

ack?i -> PLAYER1(i)
(mov!i -> response.i?result -> PLAYER1(i)) [] (quit!i -> PLAYER(i))
The input channels (response) of the players are renamed OWBtoPlayer, and the outputs of the overwriting buffers are connected to these new channels; the response channel is assigned to the input of the overwriting buffers, thus making the insertion of the buffer transparent to the physics server:

BufPLAYER(i, BufDepth) =
  OWB(BufDepth, response.i, OWBtoPlayer.i)
    [|{| OWBtoPlayer.i |}|]
  PLAYER(i)[[response.i

ack!i -> physics_SERVER)
[] (mov?i:PLAYERS -> check2_db!i -> mov_update -> UpdateAll -> physics_SERVER)
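The behaviour required of the overwriting buffer can be sketched in plain Java (a single-threaded, illustrative analogue of the CSP OWB process; class and method names are our own): writes always succeed, and when the buffer is full the oldest message is discarded, so a dead or slow player can never block the server.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Overwriting buffer of fixed depth: write() never blocks; a full buffer
// drops its oldest message to make room for the new one.
public class OverwritingBuffer<T> {
    private final int depth;
    private final Deque<T> buf = new ArrayDeque<>();

    public OverwritingBuffer(int depth) { this.depth = depth; }

    public void write(T msg) {                      // never blocks
        if (buf.size() == depth) buf.removeFirst(); // overwrite the oldest
        buf.addLast(msg);
    }

    public T read() {                               // null if nothing buffered
        return buf.pollFirst();
    }

    public static void main(String[] args) {
        OverwritingBuffer<String> owb = new OverwritingBuffer<>(1);
        owb.write("update-1");
        owb.write("update-2");          // player never read update-1: dropped
        System.out.println(owb.read()); // prints "update-2"
    }
}
```

With depth one, as used in the verification below, a failed player simply causes successive server messages to overwrite each other harmlessly.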
The server and player models are connected to make up the entire system. The server system (SERVERSYS) is defined by first linking the databases with their respective servers via the appropriate channels (check1_db and check2_db) and then joining the two subsystems, via the channel LgnToPhy, to create SERVERSYS. The complete system (SYSTEM) is then constructed by synchronizing SERVERSYS via the appropriate channels with the interleaved composition of the players (ALLPLAYERS). (Note that several of the internal channels are actually hidden when the verification is run.)

LGNSYS     = Login_SERVER [|{| check1_db |}|] Login_dbase_SERVER
PHYSYS     = (physics_SERVER [|{| check2_db |}|] Mov_dbase_SERVER)
SERVERSYS  = (LGNSYS [|{| LgnToPhy, ack |}|] PHYSYS)
ALLPLAYERS = ||| i:PLAYERS @ (BufPLAYER(i, depth))
SYSTEM     = ALLPLAYERS [|{| login_chan, mov, response |}|] SERVERSYS
2.3 The Specification

The major verification goals are absence of deadlock and livelock, and confirmation that one (or more) dying players will not block communication with the remaining players. The specification thus simply consists of a single player that does not fail. We set the depth of the overwriting buffers at one, since this is sufficient to demonstrate that a failed player will not block the servers. (We are more liberal in the Java implementation below.) The implementations to be tested against this specification are systems of two and three players, one of which plays correctly as defined above (PLAYER(i)). The other
S. Kumar and G.S. Stiles / A JCSP.net Implementation of a Massively Multiplayer Online Game
141
players are modified so that they terminate (via SKIP) immediately after executing login_chan!i; this is sufficient to cause the Physics Servers to attempt to send them messages, which will not be accepted. These two versions (with appropriate hiding) should refine the specification.

2.4 Verification

The FDR tool verified that systems of one, two, and three working players were free of deadlock and livelock, and that the two systems including the terminating players did refine the working single-player specification. The overwriting buffers are thus successful at absorbing messages that cannot be delivered.

3. Implementation in JCSP.net

3.1 JCSP.net

JCSP is a library that provides facilities to implement channel-based CSP systems in Java [9, 10]; channels are initialized to have an "output end" at the sender and an "input end" at the receiver. The JCSP.net version includes the option of setting up channels over the network by using a centralized channel name server (CNS). Channels connecting processes located on different machines (or invoked independently on one machine) must be registered at the CNS. Channel ends can be moved around a network, providing a great deal of flexibility and the possibility of making major changes in a system dynamically. Early experiments [10] on cluster systems indicated that JCSP.net could outperform similar toolsets such as mpiJava and IBM's TSpaces. Distributed robust annealing (a JCSP.net version of [18]) and real-time systems [20] monitoring movement of pedestrians have also been successfully demonstrated. JCSP.net should be ideal for game systems.

3.2 JCSP.net Implementation of the Servers

3.2.1 Login and Physics Servers

The structure of the game server system during the startup phase is shown in Figure 2. The system consists of the Login Server, the Login Database Server, and one or more combinations of a Physics Server and its associated database. Each Physics Server is associated with a particular game domain.
Initially, players do not know the name of the channel via which they will communicate with their physics server. The Login Server has an Any-to-One channel, registered at the CNS, over which players (knowing the channel name) register to initiate play. When a player registers, it sends along the name of the channel (dashed in Figure 2) over which it will listen for the Physics Server. The Login Server then passes this channel and the player's request to the Physics Server. The Database Server connected to the Login Server is used to check the player's ID; if successful, the login is validated. If the player is starting a new game, a random location in the game domain is assigned to the player; otherwise, the player is placed at its last location. The Physics Server is informed of the player's ID, its game location, the channel over which the player will listen, and possibly other information about the player's status. The Player is sent the same information, along with the name of the shared (anonymous) channel that will be used by the player to communicate with the Physics Server. The Login Server is then ready to accept login requests from other players.
The Physics Servers are associated with specific geographical zones in the game world, and each has its own database. These servers can be distributed over the network on different machines. Figure 3 shows the configuration of a Physics Server with its database after the players have connected over the shared channel. Overwriting buffers (the bubbles marked O in Figures 2 and 3) are used on the channels from the physics servers to the players. As noted above, the buffers maintain data flow if the network is slow on a player's side, and allow a server to continue if a player ceases to accept messages.
Figure 2. At startup, Players request a specific Physics Server from the Login Server and submit the channel over which they will listen via an overwriting buffer; Players are then informed of the appropriate channel to connect to the desired Physics Server.
The code fragments below show the creation of a shared (anonymous) channel by a Physics Server, the determination of the location of that channel by the Login Server, the transmission of the location from the Login Server to the Player, and the assignment of that location to a NetChannelOutput at the player. toPlayer and fromServer are the write and read ends of the channel from the Login Server to the Player; playerOut is the write end of the channel from the Player to the Physics Server.

// Physics server:
NetAltingChannelInput anonChannel = NetChannelEnd.createNet2One();

// Login server:
NetChannelLocation serverLocation = anonChannel.getChannelLocation();
toPlayer.write(serverLocation);

// Player:
NetChannelLocation serverLocation = (NetChannelLocation) fromServer.read();
NetChannelOutput playerOut = NetChannelEnd.createOne2Net(serverLocation);
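The hand-off above (a server publishing the location of an anonymous input channel, and a player turning that location into an output end) can be imitated with plain Java queues. This is a sketch of the pattern only: a BlockingQueue stands in for a JCSP net channel, and an object reference stands in for a NetChannelLocation; none of the names below are JCSP API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChannelHandoff {
    static String demo() throws InterruptedException {
        // Physics server: create an "anonymous" input channel.
        BlockingQueue<String> anonChannel = new ArrayBlockingQueue<>(1);

        // Login server: forward the channel's "location" to the player.
        // (In JCSP.net this would be a NetChannelLocation registered
        // via the CNS; a plain reference stands in for it here.)
        BlockingQueue<BlockingQueue<String>> toPlayer = new ArrayBlockingQueue<>(1);
        toPlayer.put(anonChannel);

        // Player: read the location and treat it as an output end.
        BlockingQueue<String> playerOut = toPlayer.take();
        playerOut.put("mov:player-1");

        // Physics server receives the player's first message.
        return anonChannel.take();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints "mov:player-1"
    }
}
```

The point of the pattern is that the receiver never learns where the server lives; it only holds a usable output end, which is what provides the server-location hiding mentioned later in Section 3.3.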
During play, each Physics Server receives move requests from players, updates its corresponding database server accordingly, and informs (as necessary) players in its zone of other players’ moves. If a player moves from one zone to another, the player data in the previous server is moved to the new server to which the player now belongs. This movement is easily implemented using the mobile channel ends of JCSP.net.
Figure 3. A Physics Server with players connected; overwriting buffers (O) between the Physics Server and the players protect the server if a player is either slow at accepting or ceases to accept incoming messages.
We stated above that the size of an overwriting buffer should be chosen to accommodate (at least) the number of messages one process can send to a second before waiting for a reply from the second since, if the buffer is too small, data could be overwritten before the second process fetches it [18]. In this application, however, there is no limit on the number of messages the server can send to a player, as the moves of other players may generate an unlimited amount of traffic. We have arbitrarily chosen a buffer of size four in our tests. (In practice, if packets are overwritten, the buffer could be expanded on the fly. Adding sequence numbers to packets would allow detection of overwriting.) We use the JCSP class OverWritingBuffer to create a channel end with an OWB of size four; the appropriate code is:

NetChannelInput Server2Player =
    NetChannelEnd.createNet2One(new OverWritingBuffer(4));
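The sequence-number idea suggested above can be sketched in plain Java. This is an illustrative model of overwriting-buffer semantics, not the JCSP class itself; this version drops the oldest packet when full, and the reader detects loss from a jump in sequence numbers.

```java
import java.util.ArrayDeque;

// Sketch of an overwriting buffer with sequence numbers for loss
// detection (illustrative only; not the JCSP OverWritingBuffer).
public class OwbSketch {
    private final ArrayDeque<long[]> buf = new ArrayDeque<>(); // {seq, payload}
    private final int capacity;
    private long nextSeq = 0, lastSeqRead = -1;
    private boolean gap = false;

    public OwbSketch(int capacity) { this.capacity = capacity; }

    // The writer (server) never blocks: when the buffer is full,
    // the oldest packet is discarded to make room.
    public void write(long payload) {
        if (buf.size() == capacity) buf.removeFirst();
        buf.addLast(new long[] { nextSeq++, payload });
    }

    // The reader (player) takes the oldest surviving packet; a jump
    // in sequence numbers reveals that packets were overwritten.
    public long read() {
        long[] p = buf.removeFirst();
        gap = (p[0] != lastSeqRead + 1);
        lastSeqRead = p[0];
        return p[1];
    }

    // True if the most recent read() skipped over lost packets.
    public boolean gapDetected() { return gap; }

    public static void main(String[] args) {
        OwbSketch owb = new OwbSketch(4);
        for (int i = 0; i < 6; i++) owb.write(100 + i); // 2 packets lost
        System.out.println(owb.read());        // 102 (packets 100, 101 overwritten)
        System.out.println(owb.gapDetected()); // true
    }
}
```

A server that sees gapDetected() on the player side could, as the text suggests, grow the buffer on the fly rather than stall.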
3.2.2 Database Servers

There is one database server connected to the Login Server and one for each of the Physics Servers. The login database, as noted above, keeps track of registered players. The physics databases store information on the location and status of the players and inanimate objects
(game items: treasures, etc.) in each zone. The game items, randomly placed at startup, can be picked up by players. A Java hashtable [21] is used to maintain the player databases.

3.3 JCSP.net Player Model

The Player starts up by first creating a channel end to connect to the shared input channel on the Login Server. It next creates another channel end on which it will receive messages from the physics server. The Player then logs in, sending its user ID and the physics channel end to the Login Server. The Login Server will pass 1) the required player information (including the physics channel end received from the player) to the appropriate Physics Server, and 2) to the player, the channel end needed to communicate with the Physics Server. The Physics Server will then connect with the player over the received channel, sending the Player the shared channel end over which to talk back to the Physics Server. The new player receives all the information about items present and other players on the board. (Note that the Player need not have information about the location of physics servers; this information is hidden from players, thus providing some server security.) The player-side (non-human) model is divided into three processes: keyboard, screen, and graphics. The keyboard generates random moves and sends them to the physics server (randomly dispersed in time to model an actual player; the random delay has been set to be between 2 and 5 seconds). The screen receives updates of the moves. The graphics section is connected to the screen and takes care of refreshing the board on the player side. The graphics is based on the Java Swing API (the graphics were turned off in the performance tests below).

4. Performance Analysis

4.1 Previous Work on Performance Measures

Several groups have recently studied multiplayer game performance. Distributed server systems have been designed and analyzed to measure the traffic behaviour over the network.
The major concern with networked traffic is transmission delay, or latency. Other issues such as bandwidth requirements, fault tolerance, and expansion of the system to allow more subscribers are also discussed.

4.1.1 Network Transmission Delay (Latency)

Network transmission delay, or latency, is defined as the time taken for a packet sent from the sender to reach the receiver. The main contributions to latency are the performance of the transmission medium (optical fiber, wireless, etc.), the size of the packet, router hardware, and other processing steps at the time of transmission. Farber [15] describes the traffic scenario for an online game system with 30 players. The tests consisted of a client-server model to evaluate the fast-action multiplayer game "Counter Strike" played over a LAN. Each player session lasted 30 to 90 minutes and the traffic was observed for 6.5 hours. Client and server traffic were studied separately. The player packet size was around 80 bytes. Tests showed that a round trip time of 50-100 milliseconds is sufficient for fast-play games.

4.1.2 Bandwidth Requirements

Pellegrino et al. [22] have described a procedure to calculate the bandwidth requirements, both on the player and server sides. They note that the client-server architecture is not
scalable as it requires large bandwidth, while overhead on the player side is the drawback of the peer-to-peer architecture. A model combining the merits of the two architectures is proposed to have the lowest bandwidth requirement. Their tests had four players playing BZFlag, an open-source game, on Pentium-III based PCs running Redhat Linux 7.0 connected via a Fast Ethernet switch. The player update period, TU, measured at the player side, is defined as the time taken for the player to make a move, the move to reach the server, the server to update the database and all players, and the server to send an acknowledgement to the player; the random delay between two consecutive moves is also included. The number of bytes sent by the player is LU. The upstream bandwidth at the client side is the bandwidth required for every move, i.e. LU/TU. The client downstream bandwidth covers the updates received for each move made by each of the other N players, i.e. N(LU/TU). Thus,

Total Client Bandwidth = Client Upstream Bandwidth + Client Downstream Bandwidth = (N+1)LU/TU.

Similarly, the total server bandwidth is N(N+1)LU/TU, indicating that it scales quadratically.

4.2 Simulation Setup

Our tests were done by running the entire game system on Windows and on Linux; we also did a smaller test using Linux and Windows at the same time. There was no substantial difference in the performance on the different operating systems. For the primary Windows tests we set up a game board with a playing field of 1,000 × 1,000 cells. This field is split into five zones of 200 × 1,000 cells, with each zone managed by one Physics Server. The structure of this world is shown in Figure 4. We place the Login Server on one machine and the Physics Servers on five additional machines. The players are allocated to another 25 machines: tests were run with 5, 10, 20, 50, 100, 200, 400, 700, and 1,000 players.
The performance of the servers, the players, and the response times for both the servers and players were recorded. The machines were Pentium-IV (3 GHz) PCs running Windows XP with 1 GB of RAM each; they were on a 100 Mbps connection.
Figure 4. Game board with five zones, each managed by one physics server.
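With the zone layout of Figure 4, routing a move to the right Physics Server reduces to integer division on the x coordinate. This is an illustrative sketch (the class and method names are not from the implementation):

```java
public class Zones {
    static final int ZONE_WIDTH = 200;  // each zone is 200 x 1,000 cells
    static final int NUM_ZONES = 5;

    // Maps an x coordinate in [0, 1000) to its zone, numbered 1..5.
    static int zoneOf(int x) {
        if (x < 0 || x >= ZONE_WIDTH * NUM_ZONES)
            throw new IllegalArgumentException("off the board: " + x);
        return x / ZONE_WIDTH + 1;
    }

    public static void main(String[] args) {
        System.out.println(zoneOf(0));   // 1
        System.out.println(zoneOf(199)); // 1
        System.out.println(zoneOf(200)); // 2 (player has crossed into zone 2)
        System.out.println(zoneOf(999)); // 5
    }
}
```

When zoneOf changes between two consecutive moves, the player's data must migrate to the new zone's server, which is what the mobile channel ends of JCSP.net make easy.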
4.3 Performance Results

The player end-to-end response times were measured for a number of players running from 100 (10 players on each of 10 player machines) to 1,000 (40 on 25). The flow involved in calculating player end-to-end time is shown in Figure 5. The Java date class was used for measuring time. Table 1 shows the detailed results for each of the five physics servers. The player end-to-end response time with 100 players is about 16 milliseconds. The response time increases to about 65 ms with 1,000 players; this is well within the acceptable value of 100 ms. Figure 6 shows the average response times over all physics servers for 100, 300, 500, 700, and 1,000 players. An increase in players by a factor of 10 yields a factor of only 4 in the response time. The bandwidth required, both at the player and server side, is calculated using the equations provided by Pellegrino [22]. The average time between player moves, TU, is 3.5 seconds. The maximum value of N is assumed to be 200, as there are five servers each taking around 200 players.

TU = 3.5 seconds, LU = 80 bytes = 80 * 8 = 640 bits
Client upstream bandwidth = 640/3.5 = 183 bits/second
Client downstream bandwidth = 200 * 640/3.5 = 36,572 bits/second
Total client bandwidth = 36,755 bits/second
Total server bandwidth = N * (N+1) * LU/TU = 200 * 201 * 183 = 7.36 Mb/second
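The arithmetic above can be checked mechanically. The sketch below reproduces the paper's figures; its 36,571 bits/second downstream value differs from the paper's 36,572 only by the rounding choice.

```java
public class Bandwidth {
    // Values from the paper's test setup.
    static final double LU = 80 * 8;  // bits per move packet (80 bytes)
    static final double TU = 3.5;     // mean seconds between moves
    static final int N = 200;         // players per physics server

    static long clientUpstream()   { return Math.round(LU / TU); }     // 183 bits/s
    static long clientDownstream() { return Math.round(N * LU / TU); } // ~36.6 kbits/s

    // Pellegrino's quadratic server estimate, N(N+1)LU/TU, using the
    // rounded per-client upstream figure as the paper does.
    static long serverTotal() { return (long) N * (N + 1) * clientUpstream(); }

    public static void main(String[] args) {
        System.out.println(clientUpstream());                      // 183
        System.out.println(clientUpstream() + clientDownstream()); // total client b/w
        System.out.println(serverTotal());                         // 7356600, i.e. ~7.36 Mbit/s
    }
}
```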
Figure 5. Flow chart to calculate response times: start clock; send move from keyboard; receive update from server at screen; send ack from screen to keyboard to make the next move; stop clock; measure time.
Table 1. Player End-to-End Response Time. Per-server columns give players:time, with times in ms.

Players | Machines × players | Average player end-to-end time (ms) | Server 1 | Server 2 | Server 3 | Server 4 | Server 5
100     | 10 × 10            | 16 | 20:16  | 17:16  | 23:16  | 18:16  | 22:16
200     | 10 × 20            | 16 | 39:16  | 33:16  | 42:15  | 43:16  | 43:16
300     | 15 × 20            | 16 | 51:16  | 55:16  | 68:16  | 68:16  | 60:16
400     | 20 × 20            | 25 | 64:16  | 78:16  | 93:32  | 81:31  | 84:31
500     | 25 × 20            | 28 | 80:31  | 97:16  | 116:31 | 92:31  | 115:31
600     | 20 × 20, 5 × 40    | 41 | 96:32  | 126:32 | 137:47 | 108:47 | 133:47
700     | 15 × 20, 10 × 40   | 47 | 122:47 | 135:47 | 167:62 | 122:47 | 154:47
800     | 10 × 20, 15 × 40   | 57 | 133:47 | 156:47 | 180:63 | 155:63 | 176:63
900     | 5 × 20, 20 × 40    | 60 | 149:47 | 174:63 | 208:63 | 167:63 | 202:63
1,000   | 25 × 40            | 66 | 166:63 | 189:63 | 223:63 | 192:63 | 230:78
The bandwidths at the player side and server side are thus about 36.8 kbps and 7.36 Mbps, respectively (this includes TCP/IP header information). With 40 players per machine, the bandwidth required per player machine would be about 1.5 Mbps. These values are well within the 100 Mbps speed of this network. The response time can be improved by adding more physics servers to share the load, which in turn would give better player end-to-end response times. Because each player machine in the test had 40 players, the network bandwidth and the hardware resources at each machine affected the player-side timing. With fewer players per machine, the player-side response time should improve, as hardware and network performance would be better. In Table 1, the response time up to 500 players is low, as the maximum number of players in each machine is 20. Once we had 40 players per machine, the increase in response time was steeper. This jump was seen when the number of players was increased from 500 to 600, the point at which five machines were loaded with 40 players each. Moving players between zones did not appear to add significant overhead.
Figure 6. Average player end-to-end response time vs. number of players.
The CPU load on the machines running 40 players was nearly 90%. On the server side, the CPU load was around 85%; this is within the industry standard, but is higher than recommended (70%). A much smaller test was run on 10 Linux machines running Suse 9. These machines were not identical to the Windows machines and detailed measurements were not made. No problems were encountered, however, and the performance appeared comparable.

4.4 Fault Tolerance

We have run a limited number of tests for fault tolerance. As described above, we explicitly added features (the overwriting buffers) so that a failure to respond on the part of one or more players would not prevent other players from continuing. This was tested by disconnecting active players; the system continued to properly service the remaining players. We were also able to shut down one of the physics servers with no effect on the rest of the system. (Players on the server shut down would lose any state not yet transferred to the physics database and, in the absence of overwriting buffers between the login and physics servers, a login server attempting to contact a dead physics server would stall.)

4.5 Summary

The overall performance of our system matches the industry standards of online games. The player delay is in the range of 16 to 80 milliseconds, which is acceptable for fast-action games such as racing or combat. Online games that have comparatively slow action, such as role-playing games, can tolerate delays of up to 150 milliseconds and would have a larger margin.

5. Summary and Future Work

The project began with a detailed study of online gaming. A simple game system was then designed in CSP and verified to be deadlock-free using FDR. The next stage of the project was to implement the design in Java using JCSP.net. Tests on both Linux and Windows were successful.
The final results showed that a system of five servers and 1,000 players on 25 machines met our performance goals and the timing requirements of online games. The project also showed that a robust gaming system with a significant amount of concurrency could be efficiently developed using a CSP-based approach. For game developers, this project demonstrates the facilities of JCSP.net and introduces them to game design using formal software engineering methods and appropriate verification tools. This approach should lead toward better verification and testing before releasing the game in the market, and decreased development time. This game system can easily be improved and modified to suit the latest trends. More game features, such as community development and instant chat, could easily be added; the chat feature would be particularly easy to implement using JCSP.net's mobile channels (we assume that players are not behind firewalls). On the server side, servers could easily be added to support more players. This can be done dynamically in JCSP.net. Standard fault-tolerance techniques based on additional back-up servers could be implemented to make the system more robust. The game should be easily implemented in occam-π and C++CSP, two other systems based on CSP. The hash table used on the servers storing player data resides in primary memory; it requires 1 GB of RAM and a good processor (Pentium-IV). This can be modified to store
the data on a hard disk, which would allow access when the server is shut down during maintenance or crashes. We would, however, see a decrease in performance. A more robust system could employ a commercial database such as Oracle Database 10g. Commercial databases have self-managing facilities, which help to maintain the data during faults or crashes.

Acknowledgements

Many thanks to Allan McInnes for his thorough preliminary review of this paper, and to the very conscientious CPA 2006 reviewers.

References

[1] JCSP.net Home Page. http://www.jcsp.org/
[2] J. C. McEachen, "Traffic analysis of internet-based multiplayer online games from a client perspective," in ICICS-PCM, IEEE, vol. 3, pp. 1722-1726, Singapore, 2003.
[3] J. Krikke, "South Korea beats the world in broadband gaming," IEEE Multimedia, vol. 10, Issue 2, pp. 12-14, April/June 2003.
[4] T. Nguyen, B. Doung, and S. Zhou, "A dynamic load sharing algorithm for massively multiplayer online games," in 11th IEEE International Conference on Networking, pp. 131-136, 2003.
[5] Zona Inc.: http://www.zona.net/company/business.html
[6] A. I. S. McInnes, "Design and Implementation of a Proof-of-Concept MMORPG Using CSP and occam-π," in Proc. 2005 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'2005), vol. 1, pp. 194-200, ed. H. R. Arabnia, CSREA Press, Las Vegas, June 2005.
[7] A. W. Roscoe, The Theory and Practice of Concurrency, Prentice Hall Series in Computer Science, series ed. C.A.R. Hoare, Hertfordshire: Prentice Hall Europe, 1998.
[8] F.S.E. Ltd., Formal Systems Software: http://www.fsel.com/software.html
[9] Quickstone Technologies Ltd.: http://www.quickstone.com/xcsp
[10] P. H. Welch and B. Vinter, "Cluster Computing and JCSP Networking," in Communicating Process Architectures 2002, pp. 203-222, IOS Press, Amsterdam.
[11] T. Alexander, Massively Multiplayer Game Development, Charles River Media, Inc., Massachusetts, 2003.
[12] T. Barron, Multiplayer Game Programming, Prima Publishing, Roseville, 2001.
[13] BigWorldTech: http://www.bigworldtech.com
[14] D. Bauer, S. Rooney and P. Scotton, "Network infrastructure for massively distributed games," 1st Workshop on Network and System Support for Games, Braunschweig, Germany, pp. 36-43, ACM Press, New York, 2002.
[15] J. Farber, "Network Game Traffic Modelling," University of Stuttgart, Stuttgart, Germany, 2002.
[16] Cybernet's OpenSkies: Networking engine - Introduction: http://www.openskies.net/files/Introduction.pdf
[17] "Reasons to use Coherence™: Increase application reliability": http://www.tangosol.com/coherence-uses-a.jsp
[18] G. S. Stiles, "An occam-pi Implementation of a Verified Distributed Robust Annealing Algorithm," in Proc. 2005 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'2005), vol. 1, pp. 208-218, ed. H. R. Arabnia, CSREA Press, Las Vegas, June 2005.
[19] University of Kent at Canterbury: http://www.cs.kent.ac.uk/projects/ofa/jcsp/
[20] S. Clayton and J. M. Kerridge, "Active Serial Port: A Component for JCSP.Net Embedded Systems," in Communicating Process Architectures 2004, pp. 85-98, IOS Press, Amsterdam.
[21] Java Documentation: http://java.sun.com/j2se/1.3/docs/api/java/util/Hashtable.html
[22] J. D. Pellegrino and C. Dovrolis, "Bandwidth requirement and state consistency in three multiplayer game architectures," NetGames, May 22-23, 2003.
[23] G. R. Andrews, Foundations of Multithreaded, Parallel, and Distributed Programming, Addison-Wesley, 2000.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
151
SystemCSP – Visual Notation Bojan ORLIC and Jan F. BROENINK CTIT and Control Engineering, Faculty of EE-Math-CS, University of Twente P.O.Box 217, 7500 AE Enschede, the Netherlands {B.Orlic, J.F.Broenink}@utwente.nl
Abstract. This paper introduces SystemCSP – a design methodology based on a visual notation that can be mapped onto CSP expressions. SystemCSP is a graphical design specification language intended to serve as a basis for the specification of formally verifiable component-based designs of distributed real-time systems. It aims to be a graphical formalism that covers the various aspects needed for the design of distributed real-time systems in a single framework. Keywords. CSP, Formal methods, Graphical modeling
Preamble

This paper focuses on the visual elements of the SystemCSP notation. It is accompanied by a second paper [1] that focuses on the component-related part of the SystemCSP design methodology.

Introduction

CSP is a relevant parallel programming model, and the design specification method introduced in this paper aims to foster its utilization in the practice of component-based software development. According to [2], "CSP was designed to be a notation and theory for describing and analyzing systems where primary interest arises from the way in which different components interact". CSP came into the world of practical software development via the occam programming language. Our research is situated in a context where several preceding projects dealt with ways to structure concurrency in complex control systems using occam-like approaches. One of the deliverables of those projects is GML [3], a visual modeling language for the specification of concurrent systems. GML is geared towards producing occam-like programs. It provides a lot of design freedom in the early stages of design by relying on the idea of relating processes via binary compositional relationships instead of starting immediately with occam-like constructs. With SystemCSP, we attempt to make a paradigm shift from occam towards CSP. CSP offers more expressiveness for specifying concurrent systems than its occam-related subset. In addition, experience with GML leads to the conclusion that although binary relationships are useful in the early stages of design, they tend to clutter the readability of even relatively simple diagrams. Instead of the binary relations used in GML, SystemCSP defines visual control flow elements that cover all relevant CSP operators. Still, the benefit obtained by specifying binary relationships in the early stages of design is retained by allowing a certain set of binary
relationships (different from those in GML) to be specified among components in special interaction diagrams. In SystemCSP, the same component can appear in many interaction diagrams, and all binary relationships specified in such diagrams come together in a single execution diagram. More details on interaction and execution diagrams and other issues concerning the component framework of SystemCSP are the topic of a related paper [1]. The SystemCSP notation is applicable for specifying, documenting, visualizing and formally verifying component-based designs. This includes the design of both processes and components. SystemCSP provides a way to visualize architecture, behavioral patterns of components, intra-component interactions and execution relations among components. The notation makes a distinction between components, interaction contracts and processes. CSP is focused on the interaction between processes. A process is viewed as a behavior described via some named pattern of event synchronizations. The term component is used as in modern notions of software development: "A component is a unit of composition with contractually specified interfaces and fully explicit context dependencies that can be deployed independently and is subject to third-party composition" [4]. From a CSP point of view, the behavior of a component is captured as a complex process, described with the help of one or more auxiliary processes. An interaction contract is a process or a component that has the responsibility to manage the interaction of the involved components. SystemCSP is based on the principles of both component-based design and the CSP process algebra. Such a combination promises to offer a more structured approach and more expressiveness than that offered by the occam-like approach targeted in GML. The graphical elements introduced in SystemCSP are related to the basic elements of the CSP process algebra. In this way, designs have an immediate mapping to CSP expressions.
Any CSP description can be mapped to an appropriate graphical representation in SystemCSP. This paper introduces all graphical elements used in SystemCSP. The related companion paper [1] puts more focus on the component-based aspects of software development in SystemCSP. The first section deals with the state of the art in software design, with special focus on specifying concurrency and especially on CSP-based approaches. The second section introduces the elements of SystemCSP that visualize CSP language elements. The third section introduces elements of SystemCSP that are a relevant part of the notation, but are not directly related to CSP concepts. The last section attempts a comparison between SystemCSP and some other approaches used for the visualization of software models.

1. State of the Art in Visualizing Concurrent Systems – Focus on CSP Based Systems

Humans can comprehend and communicate visualized behavioural scenarios much better than the same scenarios given via some mathematical description, e.g. CSP formulas. Still, mathematical descriptions are necessary for precise analysis of models. A workaround to using mathematical descriptions directly in software design is to introduce a set of intuitive visual elements that can be automatically mapped onto mathematical descriptions.

1.1 Finite State Machines

In CSP-related books [2, 5], (state) transition diagrams are often used to illustrate CSP interaction patterns. Those are in fact Finite State Machine (FSM) diagrams. Every node in such an FSM represents a state of the process and every edge/transition is associated with some event. Figure 1 represents one CSP description and its associated visualization based on an FSM. In fact, the FSM in Figure 1 is a typical UML-like visual representation of a state machine. State transition diagrams in CSP books differ from UML statecharts in
depicting states as small circles with state names written outside of the state circle. In addition, start and exit states bear no special visual difference from other states. The difference is of course that there are no transitions leading to the start state and no transitions leading from the exit state. One can thus visualize some CSP descriptions using transition diagrams. Looking in the other direction, CSP expressions can be seen as an attempt to capture visual FSM specifications in the form of a language-like sequential stream of characters. In order to transform an FSM into CSP expressions, some states are given names and considered to be named CSP processes. Note that the same FSM can be mapped to different CSP descriptions depending on the choice of the states to which names are assigned. However, a CSP representation of an FSM based on the minimal number of process names is unique (provided, of course, that one abstracts away from differences in the chosen names). Such a representation is obtained if only the start state and the states reachable from more than one other state (states where a join of several control flows is performed) are named. In Figure 1, states 3 and 6 are named Temp1 and Temp2 respectively, defining in this way the auxiliary processes needed as recursion entry points in the CSP description.
Figure 1. Classical FSM diagram
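The naming rule above (name the start state plus every join state) can be sketched as a small algorithm. The sketch below is our own illustration; the transition set is a hypothetical FSM, not the one in Figure 1.

```python
# Sketch (not from the paper): choosing which FSM states receive CSP process
# names, following the rule above: the start state and every state reachable
# from more than one other state are named. The transitions are hypothetical.
from collections import defaultdict

def states_to_name(start, transitions):
    """transitions: list of (src, event, dst) triples."""
    preds = defaultdict(set)
    for src, _event, dst in transitions:
        preds[dst].add(src)
    # Name the start state plus every join point (more than one predecessor).
    return {start} | {s for s, ps in preds.items() if len(ps) > 1}

fsm = [(0, "a", 1), (1, "b", 2), (2, "c", 3),
       (5, "d", 3),                      # state 3 is a join point
       (3, "e", 4), (4, "f", 5)]
print(states_to_name(0, fsm))            # the start state and state 3
```

Every other state can then remain anonymous inside a prefix chain of the named process that reaches it.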
An obvious question here is why invent something new, if one can use FSM diagrams as they are for specifying CSP processes. If one aims to capture only simple CSP processes that contain only guarded alternatives, prefix operators and event occurrences, then an FSM is a good enough abstraction. However, those diagrams are not used to depict examples containing, for example, the parallel, internal choice, hiding and renaming operators. Other issues are that different diagrams can be drawn for the same process (depending on whether recursion is expanded and how many times), and that processes with an infinite number of states are impossible to draw. Processes composed in parallel are sometimes [5] represented as rectangular boxes adorned with ports representing the events in the alphabet of the process. Such process boxes are then related via lines that connect ports that synchronize. One approach to a structured way of specifying concurrency, derived from the CCS and CSP process algebras and using state transition diagrams, is FSP [6]. FSP provides a Java library implementation and a tool for model animation and checking. Compared to CSP books, this approach goes one step further in visualizing process expressions. The
initial (start) state is shaded. The letter E inside a state circle denotes the end state. A sequential composition is created by concatenating two finite state machines: making a transition from the end of one subprocess to the start of the next one in line and hiding the start and end states merged in this way. A parallel composition is also visualized by creating an equivalent state machine, or alternatively again by connecting the appropriate ports of boxes representing subprocesses. In addition, the notation allows the systematic adding of prefixes to the names of event transitions by altering the process labels associated with states. Hiding is performed by replacing the event name with the keyword tau and reducing the state machine by merging the states related by the tau-labeled transitions. Renaming is performed by making a new state machine with names changed according to the replacement function. Time is introduced using tick events.

1.2 UML

In software development practice, UML is the de-facto standard for visualizing software design processes. UML is designed to offer general support that can be used in various development processes [7]. UML contains several kinds of diagrams based on different basic elements, and there is no mapping or firm relation between those separate views in which the same entities can participate. UML does not have precise semantics, and as such it is most useful in the early stages of design for informal communication. The lack of precise semantics makes complete consistency checks between different diagrams impossible [8]. Informal usage, based on local conventions, often creates problems in communicating designs between stakeholders. Another significant problem with UML diagrams is that they are not able to capture the concurrency structure of a program in a clear and intuitive way.
According to the survey [9] on UML usage in practice: adherence to standards is loose, there are no objective criteria to verify that a model is complete or satisfies some tangible notion of quality, miscommunication is reported in more than half of the projects, delivery of the "wrong" product is mentioned, and a high amount of testing effort is needed. According to this survey, some of the main problems with UML are: design choices scattered over unrelated views, informal use, limited possibilities for checking consistency between different views, and a disproportion between the specified architectural details and the needs of implementers. In UML designs, the precise semantics necessary for executable specifications can be obtained only if UML is combined with some formal language (e.g. SDL). Approaches based on a combination of UML and CSP also exist. Crichton and Davis [10] proposed a design pattern for specifying concurrency patterns using a subset of UML, in a way that is formally verifiable via CSP. Their approach is based more or less on a combination of statecharts and activity diagrams. But such an approach fits existing diagrams into something they were not designed for, and it also suffers from not allowing the full expressiveness of CSP. The aim of that research was to create a formally verifiable concurrency design pattern using existing UML diagrams. For our purpose, insisting on the usage of UML diagrams is not necessary.

1.3 GML

In previous research at our lab, GML [3, 11] was developed. In GML, the recommended design process starts with process blocks existing in isolation. The first design step is making the communication structure as a standard data flow model. Next, the concurrency structure is added to this model by specifying binary compositional relationships, such as sequential, (pri)parallel and (pri)alternative, between the involved processes. Some relationships are known in advance and some are subject to various trade-offs. For instance,
instead of specifying immediately that there is a parallel composition of processes A, B and C, one might first conclude that processes A and B should be executed in parallel. Only after the same type of relationship is made between, for instance, B and C, does it become possible to group A, B and C into one parallel construct. If binary relationships are specified between any two processes, and provided such a complete specification is not illegal, then it implicitly defines a grouping of processes into a tree-hierarchy of constructs. Normally, grouping is explicitly defined by using explicit grouping symbols like surrounding rectangles (box notation) or indexed bubbles (parenthesis notation) on the ends of binary relationships. Indexed bubbles at the ends of the relationships are used in a way similar to the way parentheses are used in CSP expressions. A bubble is always placed on the side of the compositional relationship next to the process that is considered to be inside the parenthesis. The index of a bubble is the number of parentheses used at that place. The CSP expression P;(T;(Q || R)) is specified in GML in one of the two ways (parenthesis and box notation) visualized in Figure 2.
Figure 2. GML models (parenthesis (adopted from [12]) and box notations)
Unfortunately, both grouping notations restrict designs to a single diagram representing the model. This diagram can be split into several views by collapsing/hiding parts of the hierarchy in separate views. It is, however, not possible for the same component to participate in several views focused on different aspects of the system, as is the case in UML models. Another disadvantage of the used grouping notations (especially the preferred parenthesis notation) is that, for the untrained eye, it is quite difficult to spot the borders of the constructs and to reconstruct the exact control flow, especially in more complex examples. When a GML model is complete, it is always a tree-like hierarchy with constructs and wrapper processes as branches and user-defined processes as leaves. The resulting executable is always occam-like. GML is a design methodology and visual notation related to occam. Like occam, it fails to use the full expressiveness of CSP. The most notable example is the inability to visualize designs shaped as finite state machine–like diagrams in a clear and intuitive way. The main difference compared to simple visualization of occam-like hierarchies is that design freedom is enhanced by defining constructs as groups of binary relationships between processes. Compared to constructing an occam-like application as a hierarchical tree, GML models seem to offer more flexibility as a design-entry view, especially in the early stages of the design, when the exact borders of the constructs are not yet quite clear. GML models can express designs that are illegal, underspecified or ambiguous. Additional rules exist to check the consistency of designs before translation to CSP is possible. One of the advantages of GML is that refining the data flow model with compositional relationships expressing the concurrency structure can be done without changing the 2D layout of the original data-flow model.
This feature makes a prospective tool based on GML suitable for application in a chain of tools, with the preceding tool in the chain producing the data flow model, e.g. based on some application-specific domain.
2. Visualizing CSP Process Expressions in SystemCSP

In SystemCSP, processes are visualized using diagrams that contain basic process elements, lines representing synchronization/communication events, process labels and various control flow elements. Processes or process parts can be visualized via transparent rectangles (transparent-box approach), exposing their internals, or via solid filled rectangles (black-box approach), hiding the internals of processes.

2.1 Basic Process Elements

Basic process elements are: START, EXIT, STOP, Tau, EventSync, Writer, Reader, and EventAccept (Figure 3).
Figure 3. Basic process elements
The existence of a start event is implicitly assumed in CSP, but is not written down in CSP expressions. In the visual notation, however, it is very useful to mark the entry point of a process or a component. The START basic process element marks the entry point of a component. The EXIT process is a point of successful termination (equivalent to the SKIP process in CSP). STOP is, as in CSP, the process that does not engage in any event. Note that the EXIT process is a compound process that communicates a successful termination event to the parent operator and then behaves as STOP. As in CSP, the Tau process represents an event internal to the component, and as such it is never offered to the environment. EventSync is an elementary process participating in event synchronization with one or more peer EventSync elementary processes executing in parallel. An event takes place when all participating EventSync processes are ready (rendezvous synchronization). Peer EventSync elements, participating in the occurrence of the same event, are connected via a dashed line. An EventSync process can in general initiate or accept events. The difference is not important from the CSP point of view, but in designs it is sometimes handy to know which side initiates the interaction. For a side that can only accept interaction, the EventAccept symbol is used. The two other special kinds of EventSync symbols are the Writer and Reader symbols, which emphasize the direction of a unidirectional communication associated with an event occurrence. The Writer and Reader basic processes are akin to channels in the occam approach. EventSync processes usually have an associated event label that specifies the event name and the details of the related data communication, if any. When data communication is present, the event label contains, in addition to the event/port name, a description of the data communication.
Data communication is described via the "?" and "!" signs, representing the direction of communication, and the associated names of the local destination or source variables. As the source of data, an expression involving multiple variables, or a function that evaluates to a value of an appropriate data type, can be used. One event occurrence can have multiple data communications associated with it.
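The rendezvous semantics of EventSync with data flowing in both directions can be sketched in ordinary threaded code. This is our own illustrative encoding, not part of the notation; the side names and values are placeholders.

```python
# Sketch (assumed names): a two-party rendezvous event carrying one value in
# each direction. Both sides block until the other is ready; only then does
# the data exchange take place.
import threading

class EventSync:
    """Two-party rendezvous carrying one value in each direction."""
    def __init__(self):
        self._barrier = threading.Barrier(2)
        self._slots = {}

    def sync(self, side, value):
        self._slots[side] = value
        self._barrier.wait()          # rendezvous: wait for the peer
        other = "P2" if side == "P1" else "P1"
        result = self._slots[other]
        self._barrier.wait()          # keep slots valid until both have read
        return result

ev1 = EventSync()
out = {}
t1 = threading.Thread(target=lambda: out.update(x=ev1.sync("P1", "y-value")))
t2 = threading.Thread(target=lambda: out.update(z=ev1.sync("P2", "w-value")))
t1.start(); t2.start(); t1.join(); t2.join()
print(out)   # P1 received w-value into x; P2 received y-value into z
```

Neither side can complete `sync` alone, which mirrors the requirement that all participating EventSync processes be ready before the event occurs.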
Basic processes belonging to the same component can be interconnected via control flow elements based on CSP operators.

2.2 Prefix-based Control Flow Elements

The prefix control flow element, which corresponds to the prefix operator of CSP, is shaped as an arrow (see Figure 4). As in CSP, it introduces a constraint on event ordering by specifying the sequential order between an event and the rest of the process executed afterwards. The prefix control flow element can lead to another basic process (event), to another control flow element (with the exception of a prefix operator) or to a component.
Figure 4. Prefix and related control flow elements
Process labels (Figure 4) are used to visualize process entry and recursion points. Thus, a distinction is made between a process entry label and a process recursion label. A process entry label represents the entry point of a process and is attached via a prefix operator to an element of a SystemCSP diagram (with the exception of a prefix operator). A prefix leading to a process recursion label means that, after the prefix operator, the process will continue behaving as defined by the process entry label carrying the same name. The combination of process entry labels and process recursion labels allows a natural visualization of recursion. Instead of using process recursion labels, one can directly draw prefix arrows to the entry points of the appropriate processes (provided they are in the same view). However, using recursion labels makes diagrams more readable.
Figure 5. Combining prefix operator and process labels
In Figure 5, the prefix leads to the description of another process: process P1 will perform event a and then behave as process P2.
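A prefix chain ending in a recursion label can be read as a simple unfolding rule. The sketch below is our own encoding: P1 matches Figure 5, while the body given for P2 is hypothetical, added only so the recursion has something to unfold.

```python
# Minimal sketch of prefix chains with process entry/recursion labels: each
# named process is a list of prefixed events followed by the label of the
# process to continue as. The driver that unfolds the recursion is ours.
defs = {
    "P1": (["a"], "P2"),        # P1 = a -> P2  (as in Figure 5)
    "P2": (["b", "c"], "P1"),   # P2 = b -> c -> P1  (hypothetical body)
}

def run(entry, steps):
    """Unfold the recursion, collecting the trace of the first `steps` events."""
    trace, label = [], entry
    while len(trace) < steps:
        events, next_label = defs[label]
        for e in events:
            trace.append(e)
            if len(trace) == steps:
                break
        label = next_label
    return trace

print(run("P1", 7))   # -> ['a', 'b', 'c', 'a', 'b', 'c', 'a']
```

The recursion label plays the role of the `label = next_label` jump: control returns to the entry point carrying the same name.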
2.3 Non-interacting Processes

Non-interacting processes are processes that do not interact with their environment. Internally, however, they can contain any number of interacting subprocesses and EventSync processes. A special kind of non-interacting process is a computation process, which contains only pure computation code. A non-interacting process is not relevant for the CSP model at the level of abstraction where it is invoked and can thus be omitted in the resulting CSP description. Non-interacting processes are specified in SystemCSP in one of the ways depicted in Figure 6. The first three symbols provide non-interacting process descriptions inside a dedicated rectangular box. The types of description illustrated in Figure 6 are, respectively: a textual description or the name of the action or scenario it represents, a detailed description of the internals, or a sequence of invoked functions. Using a brief textual description is especially useful in the early stages of the design process, e.g. while specifying use-case scenarios (see Figure 6 for an example). The last example illustrates that, in the case of a computation process, it is allowed to skip the box element and associate the description directly with the prefix operator. This is convenient, for instance, in cases where the computation process is not so relevant for understanding the diagrams, or in order to reduce the number of displayed blocks.
Figure 6. Non-interacting processes
Figure 7 illustrates combining the previously introduced elements to describe three processes that synchronize on certain events. Process entry points are marked with process entry labels carrying the names P1, P2 and P3. Note that in the case of event ev1, either process P1 or process P2 can initiate the interaction, and that when both processes are ready to perform the event (rendezvous synchronization), the associated data communication will take place in both directions. The value of variable w of process P2 will be written into variable x of process P1, and variable y of process P1 will be written into variable z of process P2. Event activate_P3 is initiated by process P1 and accepted by process P3. This event has no associated data communication. In the third interaction, the focus is on emphasizing the direction of a unidirectional data communication. Therefore, the basic processes Writer and Reader are used. Rendezvous synchronization is implied, and it is not considered important which side initiates the interaction. In Figure 7, note the distinction between the dashed lines representing synchronization points, which connect peer EventSync processes, and the solid, directed lines used for prefix control flow elements. This choice between dashed and full lines makes sense because, while the EventSync processes are points of discontinuity where the process waits for synchronization with the environment, the flow of control along prefix lines takes place without delay.
Figure 7. Combining basic processes with prefix control flow elements
2.4 Hiding and Renaming Operators

The hiding and renaming operators are applied to a process, and the result is again a process. For this reason, in SystemCSP those two operators are visualized as thin-bordered rectangular elements that relate the process label of the resulting process with the entry point of the process used as the operand. The environment of some process does not know its CSP description; it sees only the set of offered (ready) events. In CSP, one can apply the hiding operator to a process, resulting in hiding the chosen set of events from the environment and making them internal to the process. In SystemCSP, the hiding operator specifies the set of hidden events after the "\" symbol. In addition, in a CSP process expression it is possible to replace all occurrences of event/process names or event/process expressions with some other event/process names or event/process expressions (the renaming operator). The symbol used in SystemCSP for the renaming operator relies on a notation that resembles multiplying by the ratio of the new and old name (expression). This is an intuitively clear way to create the illusion of canceling the old expression and replacing it with the new one. The hiding and renaming operators are illustrated in Figure 8. The renaming element specifies that the events on and off are renamed into light_on and light_off. The process created in this way is named ELECTRIC_LIGHT_SWITCH. In the same example, the hiding operator is applied to the same SWITCH process in order to hide the event off from some users. The process name under which such users can see the SWITCH process is TURN_ON. This process will offer to its environment either the event on or no event.
Figure 8. Example illustrating the usage of the renaming operator
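The effect of hiding and renaming can be sketched at the level of traces. This trace-level view is our own simplification (in full CSP, hiding also affects choice and refusal semantics, which traces alone do not capture); the event names follow the SWITCH example.

```python
# Sketch: what renaming and hiding do to the traces a process can perform.
def rename(trace, mapping):
    """Replace every occurrence of an old event name with its new name."""
    return [mapping.get(e, e) for e in trace]

def hide(trace, hidden):
    """Hidden events become internal: the environment never observes them."""
    return [e for e in trace if e not in hidden]

switch_trace = ["on", "off", "on", "off"]

# ELECTRIC_LIGHT_SWITCH: on and off renamed to light_on and light_off.
print(rename(switch_trace, {"on": "light_on", "off": "light_off"}))

# TURN_ON = SWITCH \ {off}: users of TURN_ON never observe the off event.
print(hide(switch_trace, {"off"}))   # -> ['on', 'on']
```

Note that after hiding, TURN_ON can still perform off internally; the environment simply cannot synchronize on it, which is why TURN_ON offers either on or no event at all.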
2.5 IF Choice and Guarded Alternative

An IF element (see Figure 9) specifies, inside square brackets, a Boolean expression that represents a condition. It has two prefix arrows leading from it: the TRUE and the FALSE paths. A generalization of the IF control flow element is the SWITCH control flow element. Its symbol is like that of an IF element, but it can have more than two outgoing prefix control flow elements, each one with a different constant value associated. In CSP, the SWITCH element can always be represented by several nested IF choice operators.
Figure 9. IF conditional choice and guarded alternative choice
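The claim that a SWITCH is equivalent to nested IF choices can be checked with a tiny illustration; the selector values and branch names below are hypothetical.

```python
# Sketch: a SWITCH on a value versus the equivalent nested IF choices.
def switch_version(v):
    return {1: "branch_one", 2: "branch_two"}.get(v, "default_branch")

def nested_if_version(v):
    if v == 1:                       # [v == 1]
        return "branch_one"
    else:
        if v == 2:                   # [v == 2], nested on the FALSE path
            return "branch_two"
        else:
            return "default_branch"

agree = all(switch_version(v) == nested_if_version(v) for v in range(4))
print(agree)
```

Each SWITCH branch becomes the TRUE path of one IF, with the remaining branches nested along the FALSE path.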
The guarded alternative is a process that offers to its relevant environment a choice between several events. The branch starting with the chosen event will be followed. In SystemCSP, this element (see Figure 9) is depicted as a rectangle in which EventSync processes are half-embedded. From the outer side of the EventSync circles, prefix control flow elements lead to the communication patterns representing the alternative control flow branches. Figure 10 illustrates the usage of an IF construct for specifying a recovery block fault tolerance mechanism. The essence of the mechanism is providing several implementations of the same functionality, possibly with different QoS (Quality of Service) levels. If the results obtained by executing block F(x) fail to pass the acceptance test, then block G(x) is performed, and so on. In [12], a distributed recovery block mechanism is described in CSP. The example here is a visualization of a non-distributed version. If all recovery blocks
fail their associated acceptance tests, then the error event is used to notify the client that the recovery block has failed to provide the required service.
Figure 10. Example with IF conditional branching element
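The sequential (non-distributed) recovery block mechanism can be sketched as follows. All names here (F, G, acceptance_test) are placeholders; F is made deliberately faulty so that the fallback is exercised.

```python
# Sketch of the non-distributed recovery block: alternative implementations
# are tried in order until one passes the acceptance test; the error event
# fires only if all of them fail.
def F(x):
    return x * x          # primary implementation (faulty: squares instead of doubling)

def G(x):
    return x + x          # alternate implementation

def acceptance_test(x, result):
    return result == 2 * x

def recovery_block(x, implementations, accept):
    for impl in implementations:
        result = impl(x)
        if accept(x, result):        # passed: deliver the result
            return result
    # all recovery blocks failed their acceptance tests: signal the client
    raise RuntimeError("error event: recovery block failed to provide the service")

print(recovery_block(3, [F, G], acceptance_test))   # F(3)=9 fails, G(3)=6 passes
```

The IF element in Figure 10 plays the role of the `if accept(...)` test: its TRUE path delivers the result, its FALSE path leads to the next implementation or, ultimately, to the error event.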
Figure 11 illustrates the use of parameterized process labels and the use of logical conditions associated with a guarded alternative, on the example of a non-negative counter. The counter is specified as an array of parameterized COUNT processes, with parameter i being used as the counter value. After accepting the inc event, the COUNT(i) process will behave as COUNT(i+1). After accepting the dec event, it behaves as the COUNT(i-1) process. The event dec is guarded with a logical condition that allows the counter value to be decremented only if the value of i is greater than zero.
Figure 11. Parameterized counter process
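The behaviour of the parameterized COUNT(i) process can be sketched as a small step function; the encoding as a Python function is ours, not part of the notation.

```python
# Sketch of COUNT(i): the guard on dec mirrors the [i > 0] condition.
def count_step(i, event):
    """Return the next parameter value, or None if the event is refused."""
    if event == "inc":
        return i + 1                  # COUNT(i) -inc-> COUNT(i+1)
    if event == "dec" and i > 0:      # guarded: dec only offered when i > 0
        return i - 1                  # COUNT(i) -dec-> COUNT(i-1)
    return None                       # dec refused at i == 0

i = 0
for e in ["dec", "inc", "inc", "dec"]:
    nxt = count_step(i, e)
    i = nxt if nxt is not None else i
print(i)   # -> 1  (the initial dec was refused)
```

Because the guard removes dec from the offered events when i is zero, the counter can never go negative.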
Figure 12 illustrates the SystemCSP visualization of a CSP example that uses a guarded alternative control flow element. The example in Figure 12 also illustrates how the hiding operator is used to hide the install/uninstall events from a program user with restricted access rights. When the process Program is at its entry point, it is ready to
be installed; those users that see the program only via Restricted_program_use cannot engage in any interaction with the program. After the program is installed by some process from the environment that can see the process under the name Program, and thus can initiate the install event, the user can use Start_Menu to open the program. A user that has no access restrictions can at this point also decide to uninstall the program (uninstall event). If the program is opened (openProg event), then it can be used (UseProg menu). Using the program initially offers two options: closing the program (closeProg event) or opening some document (openDoc event). Upon opening a document, one can work with it (Work menu). Working with the document includes making choices between several actions: updating the document (updateDoc event), saving the document (saveDoc event), closing the document (closeDoc event) or closing the entire program (closeProg event). Note that in Figure 12 the assumption is that the process selected for showing in the figure is Restricted_program_use, and that because of this the events install and uninstall are shaded.
Figure 12. Example illustrating guarded alternative operator
2.6 Start and Exit Control Flow Elements

In CSP, operators like sequential, parallel, external choice and internal choice are used to combine two or more processes into a new process. In SystemCSP, control flow elements are introduced to represent those operators. In CSP, the operators and their operands are grouped via parentheses. Instead of using explicit grouping symbols (like the parenthesis bubbles or boxes of GML), SystemCSP chooses to merge the START and EXIT events of the composition with CSP operators, creating in that way an extended set of control flow elements, as illustrated in Figure 13. Every process composed via one of the CSP operators has an implicit START event and either a termination (EXIT) event or a process recursion label leading to the entry point of some other process. In a sequential combination of processes, the EXIT of one process triggers the START of the next one in line. The START event of a sequential composition corresponds to the START event of the first element in the sequence, and the EXIT event of the composition corresponds to the EXIT event of the last subprocess in the sequence. In the case of the Parallel and Choice operators, a START event is a point where control flow forks into branches, and an EXIT event is a point where the control flows of the branches join. In a parallel combination, all involved processes synchronize on both the START and EXIT events. In the case of a choice, only one branch is executed. Thus, the pair of open and close parentheses bounding the scope of a CSP operator actually corresponds to the pair of control flow elements representing synchronization on the START and EXIT events.
Figure 13. START and EXIT grouping symbols
A special kind of START grouping symbol is the FORK symbol, which branches control flow into two or more branches. A special kind of EXIT grouping symbol is the JOIN symbol, where control flow branches are joined. Often, but not always, a FORK element is paired with an appropriate JOIN element.
Figure 14. FORK and JOIN control flow elements
FORK and JOIN symbols are specified in association with branching/joining control flow elements in one of the two styles depicted in Figure 14. The look based on rectangles is more convenient when additional details need to be specified (e.g. the synchronizing alphabets related to a Parallel operator). The look based on lines is more convenient for producing a UML-like activity-diagram look-and-feel. One or more SEQ START and SEQ EXIT symbols can be associated with a single prefix control flow element. Comparing the CSP expression and its SystemCSP representation in Figure 15, one can see that all brackets used to group CSP operators with operands are present in the symbols associated with the control flow elements. Process P1 will perform event ev1 and then behave as the sequence of process P and a process constructed as the sequence of process T and the parallel composition of processes Q and R.
Figure 15. Grouping in SystemCSP
In the example in Figure 16, the FORK interleaving PAR is not paired with a JOIN interleaving PAR. In this example, the message forwarder process P always needs to be ready to accept new messages and can send a received message at some later moment in time. Therefore, after receiving a message, it spawns a copy of itself and puts it in parallel with the process Out(x) that forwards the already received message. This example in fact requires dynamic process creation, because in each recursion a fresh copy of process P needs to be created. This recursion is the reason why the black-box notation is used and not a process recursion label. A JOIN element can be drawn, but in reality it will never be reached, because new instances of P will always be created.
Figure 16. Message forwarder as an example for dynamic process creation
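The forwarder's dynamic process creation can be sketched with threads: after taking a message, a new handler is spawned to forward it while the main loop immediately returns to accepting input. The queue names and messages are hypothetical.

```python
# Sketch of the Figure 16 forwarder using dynamic thread creation.
import queue
import threading

inp, out = queue.Queue(), queue.Queue()

def out_process(x):
    out.put(x)                    # Out(x): forward the already received message

def forwarder(n_messages):
    for _ in range(n_messages):   # stands in for the unbounded recursion of P
        x = inp.get()             # always ready to accept a new message
        # spawn Out(x) in parallel; the forwarder itself "recurses" (loops)
        threading.Thread(target=out_process, args=(x,)).start()

for m in ["m1", "m2", "m3"]:
    inp.put(m)
forwarder(3)
msgs = sorted(out.get() for _ in range(3))
print(msgs)   # -> ['m1', 'm2', 'm3']
```

As in the figure, there is no join: each spawned handler simply terminates on its own, and a fresh one is created per message.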
The same non-negative counter example of Figure 11 is implemented in Figure 17, using a FORK choice element instead of a guarded alternative element. The only difference compared to the guarded alternative based design is that the events offered to the environment (the inc and dec events) are not considered to be a part of the control flow element, but are instead considered part of the branches to which the FORK choice control flow elements lead. The guarded alternative of CSP is thus in fact a special kind of external choice
operator and can always be replaced with a FORK external choice. The opposite is not the case. However, a guarded alternative is a more convenient way of specifying finite state machine–like designs.
Figure 17. Counter example using FORK choice
Figure 18 represents a complex process named Race that contains both PAR and CHOICE control flow elements. Race is a parallel combination of two processes: RaceCtrl, managing the race, and Runners, a parallel composition of the two runners participating in the race. Both runners are described via the same process description, specifying that they will engage in the start event and in the events 100m, 200m and finish. The parallel control flow element named Runners specifies that its subprocesses, the two runners, actually synchronize only on the start event. This start event is initiated by the race control mechanism (the RaceCtrl process). The race control mechanism will, on request of the runners, deliver them the current time at the milestone events 100m, 200m and finish. When both runners have engaged in the event finish, the count variable becomes equal to the number of runners and the Race process is finished.
Figure 18. Example combining Par and Choice elements
The internal choice (symbol given in Figure 13) is different from the external choice in the sense that it will first internally make a choice and then offer only the chosen branch to the environment. The internal choice is not so useful for implementation, but it represents a powerful abstraction mechanism: it is used when one needs to specify that some process will, in some still unknown way, choose one of several branches.

2.7 Exception Handling Layer

Handling exceptional situations that occur during the execution of a process is an important issue for well-designed programs. In the software development world, there is an implicit agreement that the part of the code/design that specifies/implements the handling of exceptional situations should be visually isolated as far as possible from the design/code specifying normal execution. In general, exceptional situations in some process can be handled by attempting recovery within that process (recovery model) or by aborting the process and executing some other process instead (termination model). CSP defines the interrupt operator, which covers the termination model. The process P △ Q behaves as process P until either P terminates successfully or an event occurs that activates process Q. In the latter case, further execution of process P is aborted and process Q is executed instead. Despite its name, the semantics of the △ operator is much closer to the termination model of exception handling than to interrupt handling, since it implies termination of the left-hand-side operand.
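The termination-model semantics of the interrupt operator can be sketched as follows. The cooperative encoding (P is a sequence of observable steps that polls the interrupt event) is our own simplification; the step and event names are placeholders.

```python
# Sketch: P runs until it either terminates successfully or the interrupt
# event fires, in which case P is abandoned and Q runs instead.
import threading

def interrupt_op(p_steps, q, interrupt_event):
    trace = []
    for step in p_steps:                # P as a sequence of observable steps
        if interrupt_event.is_set():    # interrupt: abort P, run Q instead
            trace.extend(q())
            return trace
        trace.append(step)
    trace.append("P terminated successfully")
    return trace

ev = threading.Event()
normal = interrupt_op(["p1", "p2"], lambda: ["q1"], ev)   # P runs to completion
ev.set()                                                  # interrupt event occurs
taken = interrupt_op(["p1", "p2"], lambda: ["q1"], ev)    # Q replaces P
print(normal, taken)
```

Note that once the interrupt fires, none of P's remaining steps appear in the trace: the left-hand operand is terminated, which is exactly the termination model described above.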
Figure 19. Take-over operator
SystemCSP adopts the interrupt operator, but under the name take-over operator. In Figure 19, in the first case the process Q can take over process P. The second example in Figure 19 illustrates the fact that the take-over operator is associative by definition. Process Q can take over process P. Process R can take over both P and Q.

2.8 FSM-like Diagrams in SystemCSP

If only events, prefix and guarded alternative elements are used, any CSP process can be mapped to both a SystemCSP diagram and an FSM, and easily converted from one to the other. In the example below, a one-to-one mapping between the SystemCSP diagram in Figure 20 and the FSM diagram in Figure 1 is obvious. The states of the FSM map to the waiting done on events and to the guarded alternative elements in Figure 20. This is illustrated by enumerating the states in Figure 1 and the corresponding EventSync and guarded alternative elements in Figure 20 with the numbers 0 to 7.
Figure 20. SystemCSP diagram
The diagram in Figure 20, made using a subset of SystemCSP, has similarities to the FSM diagram of Figure 1. The main paradigm shift is emphasizing events instead of states. Events are not shown as transitions (as in the FSM), but as EventSync processes, which allow for direct line connections to peers in the environment or to ports of the component. The mapping from SystemCSP to CSP is more direct than is the case for an FSM, where such a mapping requires distinguishing between states with exactly one outgoing transition and states where more than one outgoing transition is present. SystemCSP distinguishes between internal and external choice, while FSMs imply external choice.
3. Non-CSP Elements of Notation

3.1 Representing Sequential Designs

Computation processes are allowed to specify arbitrarily complex sequential OOP designs. In principle, they can be designed using UML or some other type of diagram. We, however, aim to provide visual programming covering all parts of the design. For specifying computation processes, special diagrams inspired by UML sequence diagrams are proposed. In addition to elements from UML sequence diagrams, the diagrams representing sequential designs in SystemCSP introduce notation elements for specifying the grouping of statements into blocks, conditional branching and loops. Conditional branching is represented by a condition specified in square brackets. At any moment, for every condition either the true or the false branch is depicted. The borders of the control flow blocks are visualized by means of large square brackets at the far left-hand side of the diagram (see Figure 21). Every function that is specified via a sequential design diagram is expected to have defined properties such as: a description of the service it provides, input/output parameters, preconditions, postconditions, and the exceptions it may raise.
Figure 21. Design of sequential code
UML sequence diagrams can depict only one scenario. SystemCSP sequential design diagrams capture the complete control flow of the function that is visualized. Going through different scenarios takes place by toggling the true/false values of the conditions, which results in the diagram being updated to display the relevant branch. UML sequence diagrams display nested function calls. In SystemCSP, every sequential design diagram focuses on modeling the internals of exactly one function (either a global function or a member function of some object), and thus only the function calls made directly from the internals of the visualized function are specified. Displaying nested
function calls is not relevant at the current abstraction level, especially because the used function calls are assumed to provide certain services regardless of their internals. A side benefit of the decision not to display nested function calls is that the source code model needed for constructing the diagram can easily be reconstructed from code (reverse engineering), since the contents of only one function need to be parsed. When editing of the function is finished, code is generated and no additional data about the visualized function needs to be preserved. The introduction of a special kind of diagram for representing sequential designs allows visualizing the coding process, resulting in a completely visual design process. Entering the design by continuously switching between mouse and keyboard slows down the design process, especially at the lowest design level, where most of the source code resides. For really efficient coding, it is expected that the prospective tool allows users to enter a diagram completely via keyboard shortcuts (e.g. arrows…). The presented form of the diagram, with predefined placement areas for visual elements, enables entering designs via keyboard only.

3.2 Components

In SystemCSP, in order to enhance reusability, a distinction is made between component types and component instances. In the following text, the term component will be used to denote a component instance. A component is a structural unit containing variables, code blocks, subcomponents and glue code between subcomponents. It is a reconfigurable and reusable unit, which can be used in different contexts and different interaction scenarios. In the CSP sense, the behavior of a component is described as a process. In our notation, components are enclosed in a rounded rectangle (see Figure 22) specifying the boundaries of the component. The entry point of a component is always marked with a start event.
Variables are defined in the component scope and represented via labels floating somewhere inside the component. A variable label contains the variable declaration consisting of its name and its type separated by a colon. Using variable labels allows omitting variable types in event labels. Ports are depicted as little rounded rectangles attached to the outer side of the component border. Inside a port is a port label carrying the externally visible name of the event or role associated with the port (see Figure 22).
Figure 22. Component, ports and interfaces
In Figure 22, the behavior of the component C1 is represented via process P1. In the case of port1 and port2, the relation between the EventSync processes and the associated port is indicated via a dashed line; in the case of port3 and port4 it is established by using the same name. Let us consider a simple example of a Printer device driver component. Figure 23 shows a SystemCSP component specifying the same behaviour.
Figure 23. Printer process
Printer is both a process and a component, while Print menu and Init are only processes and not components. The Printer process is a component, because it is bounded by a rounded rectangle and has a specified set of ports. Components can also be depicted using a black-box approach: for example, components P, Q and R in Figure 24.

3.3 Interaction Contracts

An interaction diagram is a diagram that specifies the way in which components interact. It usually contains a set of components (in black-box or transparent-box notation) centred around an interaction contract. An interaction contract, in addition to the interfaces of the processes involved in the interaction, captures the complete set of all possible interaction scenarios. This is possible due to the fact that both the contract and the roles of the participating processes are in fact CSP processes. An interaction contract is an entity that exists in the same way components do, with the distinction that the basic purpose of a contract is to provide management support for the interaction between components. Components participating in interaction contracts must provide an implementation that is a refinement (in the CSP sense) of the role specified in the interaction contract. Contracts usually have an internal component dedicated to managing the contract. For instance, let us imagine that the process Race from Figure 18 specifies an interaction contract. In that case, the processes Runner1 and Runner2 could be considered roles implemented by some external components, and all the rest would actually be an internal component managing the Race interaction contract. The simplest interaction contracts are: event, shared memory, buffered channel, and Any2One, One2Any and Any2Any channels. Interaction contracts can be made for any kind of application-specific scenario and can also be reused in the same way as components. In this paper we focus on ways to visualize interaction contracts.
A more elaborate discussion on interaction contracts and related concepts is provided in the companion paper [1].
Figure 24. Interaction - direct and via contract
In Figure 24, an interaction view is given with several components cooperating directly or via simple interaction contracts. Component A and component B interact directly; for simplicity, even the port elements are omitted in the diagram. Component B participates in an interaction with components P and Q in the way specified in the interaction contract named Contract1; it implements Role1 of that contract. Component B also participates in an Any2One contract as one of several possible Producers. In the same contract, component R plays the role of the Consumer. The ports displayed in Figure 24 are associated with provided and required roles. Since a role implementation is provided by the component and required in the contract, the ports associated with roles are depicted on the outside of the components and as plug-in sockets on the contract side. Both Contract1 and the Any2One contract in Figure 24 are visualized using the black-box approach. The internals of an interaction contract can also be visualized in an expanded (transparent-box) approach. Details of the Any2One interaction contract from Figure 24 are displayed using the transparent-box approach in Figure 25.
Figure 25. Any2One interaction contract
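As a hedged illustration of the behaviour shown in Figure 25, the contract manager of a buffered Any2One contract might be approximated in CSPm as follows. The channel, datatype and process names are assumptions; for brevity a sequence stands in for the circular array b with its size, readIndex and writeIndex bookkeeping:

```cspm
-- Illustrative sketch; names and the datatype are assumptions
max = 4
datatype Data = d0 | d1

channel write : Data   -- shared by all attached Producers
channel read  : Data   -- used by the single Consumer

-- The sequence buf models the circular buffer of capacity max
ANY2ONE(buf) =
    (#buf < max) & write?x -> ANY2ONE(buf ^ <x>)
 [] (#buf > 0)  & read!head(buf) -> ANY2ONE(tail(buf))

CONTRACT = ANY2ONE(<>)
```

The two guarded branches correspond to the guarded alternative inside the Any2One area of Figure 25: writes are accepted while the buffer is not full, and reads are offered while it is not empty.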
Note that in the transparent-box approach, roles are depicted in separate areas and associated with the appropriate ports. The contract specifies at the same time a CSP description of
the behaviour required from the roles (the Producer and Consumer roles in Figure 25) and the CSP process description of the contract management layer (the Any2One area in Figure 25). The Any2One interaction contract in Figure 25 can buffer data arriving from several producers into the array b of size max, used as a circular buffer. In order to fulfill its task, it keeps track of the current size and the positions of reader and writers using the variables size, readIndex and writeIndex. The shaded Producer port means that more than one interleaving Producer can be attached. The same goes for the shaded reader element in the guarded alternative inside the Any2One contract manager.

3.4 Distributed Systems – Allocation

This research is focused on systems that execute on distributed computer platforms and need to interact with processes from the physical world (the plant in the following text). Interaction between computing nodes takes place over network interconnections, and interaction between the application and the plant takes place via I/O interfaces. In our approach, plants and computing nodes are considered components, while networks and I/O interconnections are considered system-level interaction contracts. A system-level interaction diagram specifies the interconnections between the chosen set of nodes and networks. Some components can be replicated on several nodes. Alternative network routes are visible in such a diagram.
Figure 26. System-level interaction diagram
System-level interaction diagrams are usually used to illustrate the allocation of components and contracts participating in the same interaction. Figure 26 illustrates a system-level interaction diagram concerned with the interaction of components A, B, C, D, E, F and G via the interaction contract Contract1. In this configuration, Node 4 and Node 1 actually contain the same components. Node 1 is connected to Nodes 2 and 3 via network NW1, and Node 4 is connected to Nodes 2 and 3 via network NW2. This network topology suggests that this might be a design of a fault-tolerant system that can survive a failure of either network NW1 or NW2. It can also survive the failure of either Node 1 or Node 4. Software components and contracts must always belong to some node. One can express this visually by assigning a special port to every software component. This port needs to be attached to a component of the node kind. This is comparable to power supply ports in electric circuits, as opposed to data signal ports. Thus, this port is also depicted in a
somewhat different way, namely as a port inside the component/contract, giving the visual impression of a plug-in socket for placing the software component on top of the node.
Figure 27. Node ports
3.5 Specifying Time Properties

Time properties can roughly be divided into two groups: time constraints and execution times. Time constraints specify that certain events take place precisely at some time or before some deadline. A deadline can be set relative to global time or as the maximally allowed distance in time between the occurrences of two events. Deadline constraints are independent of the platform on which they are executed. SystemCSP specifies time constraints in square brackets, next to the element they are associated with. The keyword time is used to denote the current time in the system. In Figure 28, the start event of process P1 is scheduled to be triggered periodically at a precise moment in time. Event ev1 is the point where the current time is observed and written to the variable time1. Event ev2 specifies that its occurrence should take place at most d time units after time1, or in other words at most d time units after the occurrence of event ev1.
Figure 28. Specifying time requirements
Execution times exist only when a software component is associated with the node on which it is executed. Execution times can optionally be visualized on SystemCSP diagrams. More often, an execution time or a time interval is considered to be a property of a diagram element (e.g. a computation process block).
4. Related Work

This section compares SystemCSP with several different approaches for specifying software systems. First, a comparison is made to UML, which is the de facto industry standard for software development. Then a comparison is made to GML, as a predecessor of SystemCSP based on an occam-like approach and the concept of using binary compositional relationships.

4.1 SystemCSP vs. UML

SystemCSP gives precise semantics based on a formal method. It attempts to provide a single type of diagram with elements that can cover all the different aspects relevant for component-based design of real-time systems. SystemCSP's basic elements are based on CSP operators, which opens the possibility of formal verification. A UML design starts with designing use case scenarios, that is, with defining actors – users of the (sub)system – and use cases – services offered by the subsystem. Use cases are entities that can be related via inheritance, and via include and extend dependency relationships. In use case scenarios, the focus is on what should be done and not on how it is done. Actual use-case scenarios are usually specified via collaboration or activity diagrams.
Figure 29. Closing document use case scenario
In SystemCSP, one can use interaction views to specify use-case scenarios. Black-box components of SystemCSP can in that case be used in a similar way to the actors of UML, and interaction contracts can be used as the use-case elements of UML. In principle, inheritance, containment and dependency relationships between interaction contracts are naturally possible. Figure 29 illustrates how SystemCSP fits the purpose of specifying use-case scenarios. Note that activities like displaying the "Save changes" dialog, and performing
save document and close document activities are actually depicted as non-interacting processes. The reason is that, at the observed level of abstraction, those activities do not communicate with the environment. However, internally they can contain events and processes. State machine diagrams of UML originate from the StateChart approach. A comparison between statecharts and SystemCSP is given in section 2.8. As concluded there, the main paradigm shift is in putting the focus on events instead of on states. Otherwise, statechart diagrams can be considered directly translatable to SystemCSP. SystemCSP contains elements that do not have counterparts in FSM diagrams. Activity diagrams of UML are also similar to SystemCSP diagrams. In fact, one can see the activity diagrams of UML as a subset of SystemCSP diagrams. An activity element of an activity diagram is essentially a black-box process or a component in SystemCSP. Fork and join of activity diagrams are PAR FORK and PAR JOIN in SystemCSP. A control flow arrow in an activity diagram is a prefix element in a SystemCSP diagram. A branch in an activity diagram is an IF choice in a SystemCSP diagram, and the data flow in an activity diagram has in SystemCSP the more precise semantics of EventSync peer synchronization and data exchange. Finally, the interaction diagrams of UML are not directly comparable to SystemCSP because the basic paradigm is different. While UML is tied to OOP, SystemCSP is related to the CSP parallel programming model. Whereas control flow in UML is the sequential invocation of operations or the signaling of events from parallel threads, CSP is based on rendezvous message passing. Still, one can find some similarities in the purpose of those kinds of diagrams. Collaboration diagrams emphasize structural aspects and the associations between objects via which messages are transferred. Timeline ordering is specified by attaching numbers to operations.
Sequence diagrams omit structural aspects and emphasize the time ordering of messages by associating the vertical dimension of the diagram with time. Both collaboration and sequence diagrams have the limitation that they can basically capture only a single time-ordered trace of messages – thus they are most useful as illustrations of part of some interaction. SystemCSP focuses on expressing the synchronization points between peer processes, not on displaying only one timeline trace of messages per diagram. This principle covers all possible traces in a single diagram. Computation processes are on a lower level, where there is no possibility for concurrency, and they can be designed using any kind of UML diagram. However, in order to make maximal use of this and be able to perform automatic transformations from graphical representation to code and vice versa, a special kind of diagram loosely based on ideas from UML sequence diagrams is proposed in section 3.1.
4.2 SystemCSP vs. GML

SystemCSP started as an attempt to enhance the expressiveness of GML, especially to include state machine behavior. However, SystemCSP diverged into a completely different notation based on CSP. SystemCSP is capable of expressing any CSP behavior. This is a consequence of the different paradigm, where a process is, as in CSP, considered to be just behavior and not a tangible entity, as it de facto is in occam and GML. Instead of using binary compositional relationships, SystemCSP introduces control flow elements. This approach seems to give somewhat better readability of control flow patterns. The START and EXIT pair of control flow elements determines the boundaries of the constructs.
Besides the cluttered readability caused by using only binary relationships and no control flow elements, two main deficiencies exist in GML: the lack of explicit support for component-based design and the restriction to the occam subset of CSP. Control flow elements augmented with grouping symbols, as introduced in SystemCSP, also create the opportunity to scatter the same components over different views concerned with different aspects of the system. More details on how that is actually done can be found in the companion paper [1]. Compared to CSP, the occam subset of CSP offers only limited expressiveness for specifying and designing concurrent systems. As a consequence, GML as an occam-oriented approach suffers from the same limited expressiveness. The introduction of process entry labels and recursion process labels in SystemCSP is one of the elements that enable the full expressiveness of CSP. While GML does not allow recursions other than loops, SystemCSP allows mutual recursions to be specified. Process labels also help to make diagrams more readable by omitting the prefix lines connecting a recursion invocation to the recursion entry point. In SystemCSP, it is also possible to use a FORK kind of element (e.g. the guarded alternative element) without the corresponding JOIN element. This lays the foundation for constructing finite state machine-like diagrams. GML has, like occam, only the ALT kind of choice. ALT is a choice between two processes, with control flow continuing at the same place after a chosen alternative, regardless of the choice. SystemCSP follows CSP semantics strictly and introduces a guarded alternative control flow element that essentially forks alternatives without forcing the existence of a common join place. This allows building diagrams akin to FSM diagrams. GML has the advantage that, for visualizing a given design, the total number of elements used can be smaller, because control flow elements are present as relationships and not as explicit elements.
But this advantage comes with the already mentioned consequence that the exact control flow is more difficult to observe. In that sense, a choice between SystemCSP and GML is a trade-off between the efficiency of the notation (measured in the number of elements used) and its readability. Finally, SystemCSP does not have to be a complete substitute for GML designs. Parts of designs can still be entered in GML. Since GML is based on occam, which is a subset of CSP, and SystemCSP is based on CSP, designs from GML can be expressed in SystemCSP or made understandable to a prospective SystemCSP-based tool. Both kinds of designs can, according to their own set of rules, be used for code generation of both source code and CSPm scripts, allowing mixed SystemCSP and GML design. The only restriction is that while GML designs can be nested in SystemCSP designs, the other way around is not possible.

5. Conclusions

A novel way of component-based system description founded on the CSP process algebra is proposed. This notation can visually represent any CSP description, and any system constructed via the presented notation can be expressed as a set of CSP descriptions. Components are distinguished from interaction contracts. Finally, a way to specify exception handling, distribution properties and time properties is introduced. The notation is compared to relevant related methodologies. A library and a tool that will offer support for SystemCSP are under development. To fully exploit the benefits of the SystemCSP notation, a tool should be capable of performing automatic code generation, both in the form of executable code and of CSP scripts. The tool should also provide support for a simulation framework with animation capabilities.
Simulation can be performed either by interpreting the model inside the tool or by obtaining feedback from running the generated executable. Further work is to test the SystemCSP notation on more complex examples. We use our Production Cell [13] setup as a case study. More information about the developments concerning the SystemCSP tool and library will be available on the www.ce.utwente.nl web site.

Acknowledgments

The authors are grateful to Job van Amerongen, Marcel Groothuis, Matthijs ten Berge, Dusko Jovanovic and Peter Visser for their valuable comments and suggestions regarding the SystemCSP theory and the contents of this paper.

References

[1] B. Orlic and J.F. Broenink, "Interacting Components", presented at CPA 2006.
[2] A.W. Roscoe, The Theory and Practice of Concurrency, Prentice Hall, 1997.
[3] G.H. Hilderink, "Graphical modelling language for specifying concurrency based on CSP", IEE Proceedings Software, vol. 150, pp. 108-120, 2003.
[4] C. Szyperski, Component Software: Beyond Object-Oriented Programming, 1998.
[5] S. Schneider, Concurrent and Real-Time Systems: The CSP Approach, Wiley, 2000.
[6] J. Magee and J. Kramer, Concurrency: State Models & Java Programs, Wiley, 1999.
[7] G. Booch, J. Rumbaugh, and I. Jacobson, The UML Reference Guide, Addison-Wesley, 1999.
[8] P. Marwedel, Embedded System Design, Kluwer Academic Publishers, Dordrecht, Netherlands, 2003.
[9] C.F.J. Lange and M.R.V. Chaudron, "In practice: UML software architecture and design description", IEEE Software, vol. 23, pp. 40, 2006.
[10] C. Crichton, J. Davies, and A. Cavarra, "A Pattern for Concurrency in UML", presented at FASE, 2002.
[11] G.H. Hilderink, "Managing Complexity of Control Software through Concurrency", PhD thesis, University of Twente, 2005.
[12] W.L. Yeung, S.A. Schneider, and F. Tam, "Design and verification of distributed recovery blocks in CSP", University of London, 1998.
[13] L.S. van den Berg, "Design of a Production Cell setup", University of Twente, 2006.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Interacting Components Bojan ORLIC and Jan F. BROENINK CTIT and Control Engineering, Faculty of EE-Math-CS, University of Twente P.O.Box 217, 7500 AE Enschede, the Netherlands {B.Orlic, J.F.Broenink}@utwente.nl
Abstract. SystemCSP is a graphical modeling language based on both CSP and the concepts of component-based software development. The component framework of SystemCSP enables specification of both interaction scenarios and relative execution ordering among components. Specification and implementation of the interaction among participating components is formalized via the notion of an interaction contract. The approach enables incremental design of execution diagrams by adding restrictions in different interaction diagrams throughout the process of system design. In this way all the different diagrams are related into a single formally verifiable system. The concept of reusable formally verifiable interaction contracts is illustrated by designing a set of design patterns for typical fault tolerance interaction scenarios. Keywords. SystemCSP, CSP, components, contracts, contexts, fault tolerance, design patterns, formal methods, graphical modeling, simulation, hierarchical verification.
Preamble

This paper focuses on the component-related part of the SystemCSP design methodology. Although the topics discussed here are presented in a self-contained way, full understanding of the paper might depend on familiarity with the SystemCSP graphical notation presented in the preceding paper [1], which focuses on introducing the visual elements of the SystemCSP methodology.

Introduction

Component-based software engineering is in practice most often based on the client-server architecture model, where one component (the server) provides a certain service and the other component (the client) uses that service. In some application areas, typical generic patterns are captured in the shape of standardized and precisely defined client-side and server-side interfaces (e.g. OPC [2]). This allows system integration based on components supplied by different vendors. Integration efforts are minimized as long as the used components adhere to these prescribed interfaces. In a client-server system, the contract specifying the interaction scenarios and the adjustable parameters of service delivery is implicit and partially reflected in the interface definitions of the provided and required services. Sometimes those interfaces also offer services for contract parameter negotiation. Making an explicit entity that implements such a contract is considered to be unwanted overhead. This approach is justified for data processing systems
with clearly directed data flows, starting with a client's request to the service provider, which can, in order to provide its service, further delegate part of its task to some other service provider(s) and in that way act as a client of the next component(s) in the client/server chain/tree. The obtained results travel in the opposite direction. Complex client/server systems may, however, require the existence of components that provide the management of the interaction between several involved components. Components managing the interaction of other components are in fact specifying and implementing explicit contracts governing the interaction. For instance, a typical case would be providing, for fault tolerance reasons, redundancy in the form of replicated server components and an additional component/contract governing the interaction of the involved components. Sometimes, e.g. in complex control applications, the interaction between components is not naturally structured as a chain or tree of clients and servers. For instance, the devices in an industrial production cell system need to cooperate as peers in order to produce a result. Every participating device has a precisely defined role, but it is not always clear what the service is, and whether some component in that interaction is playing the role of a server or of a client. Instead, the interaction between components is an interaction of peers that work together to achieve some higher-level behavior. In those situations, a structured approach is to introduce entities that will manage and supervise the interactions between components. Such an entity in fact defines an explicit contract between the involved components. SystemCSP introduces the interaction contract as a vehicle to manage the interaction between components in a structured and formally verifiable way. An interaction diagram depicts a set of components centered around an interaction contract.
An execution diagram focuses on the control flow elements that determine the possible execution orderings of components. Control flow elements used in execution diagrams and binary relationships used in interaction diagrams are kept mutually consistent, allowing the same component to specify its execution relationship with other components in the interaction diagrams it participates in. Section 1 of this paper provides information about related research efforts that were taken into consideration in designing the component framework of SystemCSP. Section 2 provides detailed information on the component framework introduced in SystemCSP. Section 3 describes some well-known fault tolerance design patterns in the shape of reusable interaction contracts, with a precise, formally verifiable specification. At the end, conclusions and recommendations for future work are presented.

1. Related Work

In [3], Coordinated Atomic Actions are introduced as a way to structure safety-critical systems involving complex concurrent activities. A coordinated atomic action (CA action) is an entity in which two or more threads of control, implementing the roles of the participating components, meet and synchronize their activities, atomically performing a set of operations on a set of objects belonging to the CA action entity. In this way, a CA action behaves as a transaction and represents a general framework for dealing with faults and providing ways of recovery. Obviously, a CA action managing an interaction contains more of the information needed for handling composite exceptional occurrences than any of the participating components in isolation. This makes CA actions a structured design pattern convenient for use in safety-critical systems. The CA action design pattern is illustrated in [3] on a model of the Production Cell case study. In [4], a formal contract is introduced as a design pattern that manages the interaction among components in a side-effect-free way.
A formal contract is defined as a state machine that encodes the interaction between components, relying on a system of asynchronous modification requests from the components to the contract and state change notifications from
the contract to the components. The contract is promoted as an interaction entity that should substitute occam/CSP channels, since its ability to capture a complete N-directional specification of the interactions between the involved components makes it superior to channel usage. It is suggested that it is possible to transform a formal contract, being a state machine, into a CSP specification, in that way allowing formal checking of the interaction patterns managed by contracts. The usage of such a formal contract is foreseen as support throughout the full development cycle. The paper reports that the usage of formal contracts in a real-life software problem resulted in a significant reduction of complexity and the elimination of some typical problems related to unstructured use of concurrency. WRIGHT [5] is an Architecture Description Language (ADL) that relies on CSP to describe the architecture of software systems. The basic abstractions of WRIGHT are: components, connectors and configurations. A component consists of two parts: computation and interface. The interface consists of ports; each port is an interaction in which the component can participate. The use of ports is to allow consistency checking and to guide programmers in the use of the associated component. The connector specifies the interaction between a set of components. It does that by providing the description of the Roles, representing the expected behavior of the participants, and the Glue, representing the specification of how the participating roles cooperate in the scope of the interaction managed by the connector. A Configuration is a set of component instances combined via connectors. WRIGHT is a textual way to describe architectures; no visual notation is provided. In [6], making components contract-aware is argued for, in order to be able to trust components employed in mission-critical applications.
This paper deals with client-server architectures and identifies four levels of increasingly negotiable properties in the contract specification. On the basic (syntactic) level, there is an interface description language (IDL)-like description of contract properties. This includes the services/operations a component can provide/perform, the associated input and output parameters and the possible exceptions that may be raised during operation. Component frameworks that support (only) first-level contracts are, for instance: CORBA, Component Object Model (COM) and JavaBeans. Level 2 contracts are behavioral contracts. These contracts offer the possibility to specify pre-conditions, post-conditions and invariants for the performed operations. Typical examples of level 2 contracts are "design by contract" in the Eiffel language [7, 8] and the Object Constraint Language (OCL) of UML. Level 3 contracts are synchronization contracts. Contracts on this level specify behavior in terms of synchronization and concurrency, e.g. whether a dependency between provided services is parallelism, sequence, shuffle, etc. The Service Object Synchronization (SOS) mechanism is used as an example of contracts on this level. Finally, level 4 contracts allow dynamic adaptation of the contract based on Quality of Service (QoS) requirements. TAO (the adaptive communication environment object request broker) is used as an example of a level 4 contract. Although the "four level" classification of contracts was introduced for the implicit contracts of the client-server architecture, the classification is still a useful way to define more precisely the position of our notion of a contract.

2. Basic Concepts of the SystemCSP Component Framework

2.1 Components

In SystemCSP, besides a process describing the normal execution mode, components optionally contain a process managing possible reconfiguration scenarios and a process specifying the recovery activities upon the occurrence of exceptional situations.
182
B. Orlic and J.F. Broenink / Interacting Components
2.2 Interaction Contract

The interaction contract of SystemCSP is in fact the same concept as the connector in WRIGHT. The name interaction contract was chosen because it is more general and suits its purpose better than the name connector. Indeed, the most simple interaction contracts (events, Any2One channels, buffered channels, etc.) can be classified as connectors, but the entity specifying the interaction among the devices in an industrial production cell is more than just a connector. Compared to the formal contracts of Boosten, interaction contracts are directly implemented as CSP processes, and no transformation is needed in order to achieve formal checking. In addition, interaction contracts are not considered a substitute for channels, but a higher-level primitive described via event(channel)-based interactions with/among the participating components.

The CA actions safety pattern [3] illustrates the importance of a centralized entity maintaining the interaction between participating components, and thus serves as a motivation for introducing interaction contracts as separate, explicitly existing entities. Compared to CA actions, interaction contracts are considered a more structured approach because they achieve the same purpose, but rely on a safer and more structured way to use concurrency. As in CA actions, one of the main strengths of interaction contracts is the opportunity to nest the handling of exceptional situations in contract facilities, where more knowledge is available about the current state of the interaction than in the participating components in isolation.

An interaction contract is an abstract entity whose main purpose is specifying and managing interactions between components. By defining an interaction as an abstract entity (that can be instantiated in the same way as components can), the possibility to reuse design patterns captured in the form of interaction contracts is introduced.
An interaction contract prescribes the roles of the participants and offers additional interaction management support. It can introduce additional constraints on the way component instances interact, provide buffering support and offer exception handling facilities. A contract normally consists of three phases: checking preconditions, performing the action and checking postconditions. An action can contain an interaction pattern specified via events or via subcontracts. In the light of the four levels of contracts classified in [6], interaction contracts provide natural support for the first three levels and the possibility to build an application-specific Quality of Service layer on top of them. On the basic contract level, an interaction contract is described via event/channel interconnections, the operations/actions they represent with the associated input/output parameters, and a defined set of possible exceptions that can propagate via the event/channel infrastructure. The second level is achieved by dividing every contract into three parts: optional checking of preconditions, the mandatory action and optional checking of postconditions. The third level is naturally supported by the CSP structure of the contract, including the participating roles and the optional contract manager. In addition to the used channel/event ports, the CSP description of the contract encapsulates all possible scenarios for contract execution. Level 4 contracts can be built as an additional layer in application-specific contracts; general design patterns can be made to construct reusable QoS contract layers. An interaction contract specifies roles for which a component willing to participate must provide an implementation. The implementation of a role must be a (trace) refinement, in the CSP sense, of the role description required in the contract.
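The three contract phases described above can be sketched in a few lines. The following Python fragment is our illustration only, not SystemCSP code; all names in it (`Contract`, `ContractViolation`, `run`) are hypothetical.

```python
class ContractViolation(Exception):
    """Raised when a pre- or postcondition check fails."""

class Contract:
    def __init__(self, action, precondition=None, postcondition=None):
        self.action = action                # the mandatory action
        self.precondition = precondition    # optional phase 1
        self.postcondition = postcondition  # optional phase 3

    def run(self, *args):
        # Phase 1: optional precondition check.
        if self.precondition and not self.precondition(*args):
            raise ContractViolation("precondition failed")
        # Phase 2: the mandatory action (in SystemCSP, an interaction
        # pattern specified via events or subcontracts).
        result = self.action(*args)
        # Phase 3: optional postcondition check on the result.
        if self.postcondition and not self.postcondition(result):
            raise ContractViolation("postcondition failed")
        return result

divide = Contract(
    action=lambda a, b: a / b,
    precondition=lambda a, b: b != 0,
    postcondition=lambda r: isinstance(r, float),
)
print(divide.run(6, 3))  # 2.0
```

A failing check raises an exception, which matches the role the pre- and postcondition tests play in error detection (Section 3).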
In CSP, an implementation is considered to be a (trace) refinement of a specification when the set of event traces that can be produced by executing the implementation process is a subset of the set of traces that can be produced by executing the specification process. In other
words, the behavior of an implementation must stay within the behavior defined by the specification. This approach allows step-wise refinement during the design process and formal verification of even the early stages of the design. The same role/component can be represented via several process descriptions on different abstraction levels, ranging from a high-level specification to a low-level implementation. The refinement property between process descriptions on different levels can be formally verified. Section 3 illustrates the concept of an interaction contract by introducing several fault tolerance design patterns in the form of reusable SystemCSP interaction contracts.

2.3 Contexts

A component can contain subcomponents and contexts. Contexts are nested inside components. There are two kinds of contexts: physical contexts and virtual contexts (e.g. a mind context or the internet). Via passages, new contexts can be opened while remaining in the previous contexts, or a component can move from an old context into a new one (similar to following an internet link in a new window or in the existing one). For instance, a human-like component can at the same time be in at most one physical context and in any number of virtual contexts. Self-aware components (e.g. humans or robots) are, upon entering some context, faced with a choice of interaction contracts offered by the context. Interaction contracts are abstract definitions. Instances of interaction contracts (in the following text simply called interaction contracts) are located inside contexts. Contexts provide the concrete environments in which interaction contracts can appear. A contract is abstract, and when an instance of it is mapped onto some context, its notions need to be mapped to objects in that context. For instance, a football game can be considered an interaction contract with certain rules; the notions used in the description of its rules (goals, terrain, etc.) must be mapped to existing objects in the context.
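Returning briefly to the trace-refinement relation of Section 2.2: for small finite processes it can be checked mechanically. The sketch below uses a toy encoding of processes as transition-system dictionaries; the encoding and all names are ours, not SystemCSP notation.

```python
# A process is encoded as {state: [(event, next_state), ...]}.

def traces(lts, start, depth):
    """All event traces of length <= depth from the start state."""
    result = {()}
    frontier = [((), start)]
    for _ in range(depth):
        nxt = []
        for trace, state in frontier:
            for event, succ in lts.get(state, []):
                t = trace + (event,)
                result.add(t)
                nxt.append((t, succ))
        frontier = nxt
    return result

def refines(impl, impl_start, spec, spec_start, depth=5):
    """Trace refinement: every implementation trace is a specification trace."""
    return traces(impl, impl_start, depth) <= traces(spec, spec_start, depth)

# Specification: after a coin, offer a choice of coffee or tea, repeatedly.
SPEC = {"s": [("coin", "t")], "t": [("coffee", "s"), ("tea", "s")]}
# Implementation: always serves coffee; a valid trace refinement of SPEC.
IMPL = {"s": [("coin", "t")], "t": [("coffee", "s")]}

print(refines(IMPL, "s", SPEC, "s"))  # True
print(refines(SPEC, "s", IMPL, "s"))  # False: SPEC also has 'tea' traces
```

Tools such as FDR perform this kind of check efficiently on full CSP; the exhaustive enumeration here only illustrates the subset relation.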
Figure 1. Contexts and contracts
2.4 Interaction and Execution Diagrams

The description of every component contains one execution diagram and one or more interaction diagrams. The execution diagram specifies the control flow, which determines the possible orderings in which subcomponents are executed. An interaction diagram focuses on the interaction among subcomponents and contains a set of components grouped around an interaction contract.
SystemCSP aims to allow the same component to participate in several different interaction diagrams. This is in a way similar to UML diagrams, where one can focus on certain aspects of some entity in one diagram and on other aspects in other diagrams. Unlike in UML notation, where there is no relation between different diagrams, in SystemCSP all interaction views inside one component provide a single, consistent, formally verifiable model of the system. This model is reflected in the execution diagram of the component. In Figure 2, on the left-hand side, two interaction diagrams are given, and on the right-hand side the associated execution diagram is shown. Components B and C appear in both interaction diagrams because they engage in both interaction contracts. Component B is depicted via the black-box approach in the upper interaction diagram and via the transparent-box approach in the lower one. Contract 1 from the upper interaction diagram is depicted via the transparent-box approach, and Contract 2 from the lower interaction diagram via the black-box approach.
Figure 2. Specifying execution relationships in interaction views
Components participating in an interaction diagram do not exist in isolation; they are nested in some parent component that specifies the set of their possible execution orderings in the associated execution diagram (e.g. the right-hand side diagram of Figure 2). Thus there is always some execution relationship between participating components. The binary relationships of GML [9, 10] served as an inspiration for introducing binary execution relationships between components in interaction views. The experience with using GML showed that specifying binary relationships among components is very useful in the early stages of the design, but somewhat clutters readability in later phases, when the focus is on control flow.
Figure 3. START and EXIT control flow elements
In the companion paper [1], a set of control flow elements is introduced (see Figure 3). The execution diagram on the right-hand side of Figure 2 illustrates the usage of control flow elements to specify the execution pattern of a component whose internals are represented. Note that the START and EXIT elements can also be interpreted as binary relationships between any two related subprocesses. Looking at components A and B in the execution diagram, one can notice that they are placed immediately after the FORK PAR control flow element, each at the start of one of the parallel branches. One can also interpret this as a FORK PAR binary relationship between components A and B. Thus, the relationship between A and B is stronger than PAR, because it implies that those two components are first in the parallel branches and thus synchronize on the START event. In the interaction diagram, dashed lines adorned with binary relationship symbols are used to specify a binary execution relationship between components. Components A and B are related in the upper interaction diagram with a dashed line adorned with a FORK PAR symbol. Components B and C are also in different branches, but they synchronize on a termination (EXIT) event instead of on START. Thus, the binary relationship relating components B and C is of type JOIN PAR. The FORK choice control flow element specifies that a choice is offered between component D and the parallel composition starting with components A and B. Thus, one can say that there is a FORK external choice binary relationship between components D and A and also between components D and B. In addition, components B and D are also related via a JOIN external choice binary relationship. When both a FORK and a JOIN of the same kind of binary relationship are present between two components, we introduce one symbol instead of two and call such a binary relationship a STRONG relationship.
The symbol for a STRONG relationship (see Figure 4) contains, in addition to the symbol of the operator, both the FORK and JOIN symbols. The relation between components B and D is thus a STRONG external choice. Besides STRONG relationships, WEAK binary execution relationships exist as well. A WEAK PAR exists between elements in parallel branches that do not synchronize on START or EXIT events. Components related via a WEAK PAR binary relationship can, however, synchronize on user-defined events. The WEAK interleaving PAR specifies that there is no synchronization at all between components belonging to parallel branches. As can be seen in Figure 4, besides the PAR binary relationships, the sequential and choice binary relationships also have WEAK and STRONG forms in addition to START and STOP forms. The sequential binary relationship, in addition to its STRONG and WEAK forms, also has a PRECEDENCE form. It specifies that the involved components are executed one after another, but not necessarily immediately after each other.
WEAK and STRONG relationships are only implicitly specified in control flow diagrams, with STRONG ones represented via pairs of START and EXIT control flow elements and WEAK ones only as a relative position in the diagram. Still, an execution diagram and the set of binary execution relationships specified in the related interaction diagrams carry essentially the same information.
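The meaning of a STRONG PAR, synchronization on both the START and the EXIT events with independent execution in between, can be imitated with two barriers. The Python sketch below is our illustration of that idea, not SystemCSP semantics; all names are invented.

```python
import threading

# STRONG PAR between two components: both synchronize on START (first
# barrier) and on EXIT (second barrier); between the two barriers they
# run independently, as in a WEAK PAR.
start = threading.Barrier(2)   # the shared START event
exit_ = threading.Barrier(2)   # the shared EXIT event
log = []
lock = threading.Lock()

def component(name):
    start.wait()               # START synchronization
    with lock:
        log.append(f"{name} running")
    exit_.wait()               # EXIT synchronization
    with lock:
        log.append(f"{name} done")

threads = [threading.Thread(target=component, args=(n,)) for n in ("B", "C")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Because of the EXIT barrier, both "running" entries are guaranteed to
# appear in the log before either "done" entry.
print(log)
```

Dropping the two barriers yields a WEAK interleaving PAR: no ordering between the components is guaranteed at all.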
Figure 4. STRONG and WEAK relationships
Interaction diagrams can visualize the order of grouping (see the numbers on the ends of the relationships in the lower interaction diagram displayed in Figure 2), with a lower number denoting a closer execution control flow element. Note that in this ordering, the same number can be used only for exactly the same kind of JOIN/FORK contract. By aligning the elements of the execution diagram in such a way that the control flow goes downwards, with FORKs and JOINs as horizontal lines connected via prefix arrows to the involved components below and above, a form that resembles the UML activity diagram is created (see the execution diagram in Figure 2).
Figure 5. Transformation rules
In the general case, it is relatively straightforward to keep the execution relationships consistent between the views specifying interaction and execution. Components related via parallel or choice binary relationships in an interaction diagram form separate branches in the execution diagram. Components related via binary sequential relationships belong to the same branch, with the one that is executed first located closer to the top of the branch. The intuitive set of rules for converting binary execution relationships between components into control flow elements of execution diagrams is illustrated for the FORK kind of contracts in Figure 5; the set of rules for JOIN contracts is comparable. In principle, the design starts with WEAK execution relationships (PRECEDENCE, WEAK SEQ, WEAK PAR, WEAK CHOICE), which are then either gradually refined to stronger variants (START, STOP and STRONG kinds of binary relationships) that specify implicit grouping, or stay weak if that is the intention. An interaction diagram captures only the binary relationships relevant for the given interaction. Introducing an extended set of execution relationships with START (FORK), EXIT (JOIN), WEAK and STRONG forms enables specifying a binary relationship between two components in isolation from the rest of the environment. In practice, this allows the same component to appear in different interaction diagrams in a way that is consistent across diagrams. Designs are in this way centered around interactions, with elements from different interaction diagrams strongly related. An execution diagram of a component is built incrementally, throughout the design process, by adding restrictions in different interaction diagrams. All the different diagrams are related into a single formally verifiable system.

2.5 System Level Simulation

SystemCSP targets the design of applications that run on top of distributed computer platforms and that interact with the physical environment (called the plant in the following text).
Interaction between computing nodes takes place over network interconnections, and interaction between the application and the plant takes place via I/O interfaces. In our approach, nodes and plants are captured as components, and networks and I/O interconnections as system-level interaction contracts. Defining the concurrency skeleton of a complete system in SystemCSP allows one to perform/manage system-level co-simulation between different domains. The intention to use SystemCSP for system-level specification, design, implementation and co-simulation of distributed control systems justifies naming the complete framework SystemCSP. A similar approach was already tested in related work [11], where a system-level simulation framework for co-simulation of a complete distributed system was based on an occam-like approach. The focus of that simulation framework was the prediction of the influence of network delays on the behavior of embedded control systems. Execution times of code blocks were considered negligible compared to network delays, and the network simulator was reused from the TrueTime [12] simulation framework.

2.6 Potential for Hierarchical Verification

Consistent usage of interaction contracts on all hierarchy levels of the developed system has the potential to enhance the possibility to perform hierarchical verification. This is the case when an interaction contract specifies one complete, self-sufficient interaction pattern that does not synchronize with the rest of the system. As such, it can be formally verified as a separate unit, in isolation from the rest of the system.
Formal verification of an interaction contract is in fact constructing an equivalent state machine for the composed roles and the contract management layer. Such an equivalent state machine can, for instance, capture timing properties in addition to all possible traces. A formally verified interaction contract can be substituted with a simplified interaction pattern relating the involved components directly. For instance, in the case where roles are composed via a STRONG parallel relation, it would suffice to replace, in all participating components, the complete role implementations with a single barrier synchronization point at those points where the components engage in the roles required by the contract. If such simplification is systematically performed in a bottom-up manner for all contracts and subcomponents inside some component, simplified equivalent state machines are obtained that represent the roles of this higher-level component. By applying this method consistently while progressing through the hierarchy of components in a bottom-up fashion, it is possible to perform hierarchical verification, and in that way to decrease the significance of the state-explosion problem inherent in formal checking methods. This potential of interaction contracts to serve as a vehicle for hierarchical bottom-up verification of systems is yet to be explored.

3. Fault Tolerance Design Patterns as Reusable Interaction Contracts

In this section, a set of fault tolerance design patterns useful for the development of real-time safety-critical distributed systems is presented. The patterns are designed and used to illustrate the usage of interaction contracts as reusable units in the practice of software development. The second aim of this section is to test the capabilities and expressiveness of SystemCSP when used as a vehicle for visualizing and structuring concurrency during the design of complex concurrent systems.
In fault-tolerant systems, effort is made to design a system that can continue providing the required, or a degraded, service despite the presence of faults in the system. A fault in a system can cause an erroneous state of some component. This error can propagate further and cause a failure of the expected service delivery. Faults can be transient or permanent. Fault tolerance can be seen as a process consisting of error detection, error containment (isolating the error from spreading further), error diagnosis and error recovery [13]. In a SystemCSP-based system, functional error detection is naturally located in the precondition and postcondition tests related to the execution of interaction contracts and subcontracts. Detecting errors in the timing relies on the watchdog design pattern. Upon detection of an error in an interaction contract, the contract can halt further progress of the interaction and in that way isolate the error from spreading further. An interaction contract is also a natural place to perform error diagnosis, since an interaction contract can possess more information about the current state of the interaction than the participating components on their own. The purpose of error recovery is to substitute an erroneous state with an error-free state. This state can be some previously saved state, a degraded part of it, or a new error-free state. Forward error recovery attempts to handle errors by finding a new state from which the system can continue operation. Usually it is based on replication redundancy. Replication can be done in software or in hardware (replicated specialized hardware, complete nodes or network interconnections). Forward error recovery is predictable in terms of time and memory overhead and is thus often used in real-time systems [14]. Backward error recovery handles erroneous states by restoring some previous error-free state. Backward error recovery is especially suited for handling errors caused by transient faults.
It also has the capability to handle unpredictable errors. The most widely used backward error recovery mechanism is checkpointing [14].
Another useful fault tolerance design pattern is exception handling, which can have termination or resumption semantics. The take-over operator of SystemCSP [1] covers the termination semantics. The resumption semantics upon the occurrence of an exceptional situation is in our case just delegating exception handling to another part of the same process. Exceptions that cannot be handled are propagated across component boundaries; a special design pattern is introduced to perform this in a clean and formally verifiable way. This section introduces design patterns for some of the most important fault tolerance mechanisms: watchdog, replication, monitoring, event poisoning and checkpointing. First, the watchdog design pattern is introduced as a way to detect timing faults. Next, design patterns that implement several different kinds of replica management are introduced. Monitoring, an important activity in safety-critical systems, is also introduced via a design pattern. The event poisoning design pattern is introduced as a way to transfer information about exceptional situations across component borders. Finally, a design pattern that uses the checkpointing mechanism is introduced.

3.1 Watchdog Design Pattern

The interaction view specified in Figure 6 illustrates the interaction between a user-defined component and the timing subsystem component via the watchdog interaction contract. The watchdog contract is used to detect timing faults and to initiate built-in recovery mechanisms.
Figure 6. Interaction diagram: using a watchdog interaction contract
3.1.1 Timing Subsystem

Figure 7 introduces one possible design of a timing subsystem. The purpose of this example is not to provide a ready-to-use design, but rather to illustrate how convenient SystemCSP is for making such designs. Besides, it introduces the elements needed for understanding the working of the watchdog design pattern. The timing subsystem contains several processes executed concurrently. HW_TIMER is a process implemented in hardware that produces instances of hardware interrupt processes (HW_INT) at regular intervals. The HW_INT process synchronizes with the CPU on the event int, thereby invoking TIMER_ISR. The process CPU acts as a gate that can disable (event int_d) or enable (event int_e) interrupts. When interrupts are enabled, the event int can take place, and as a consequence the interrupt service routine TIMER_ISR will be invoked. Upon an int_d event occurrence, the left branch in the guarded alternative of the CPU process is followed, which allows only the int_e event as the next event; the event int thus cannot be accepted, and as a consequence interrupts are disabled.
TIMER_ISR increments the value of the variable time that it maintains. TIMER_ISR also maintains a sorted list of processes waiting on timeout events. Processes in this list for which the time they wait for is less than or equal to the current time are awakened using the wakeup event. The awoken processes are removed from the top of the list. If an awoken process is periodic, it is re-added at the proper place in the waiting list.
Figure 7. Timing subsystem
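The TIMER_ISR bookkeeping described above, a sorted waiting list, a tick that advances time, wakeup of all due processes and re-insertion of periodic ones, can be approximated as follows. This Python sketch uses our own names and a heap in place of the sorted list; it is not the SystemCSP design.

```python
import heapq

class TimerSubsystem:
    def __init__(self):
        self.time = 0
        self.waiting = []            # heap of (due_time, name, period)

    def subscribe(self, name, timeout, period=None):
        """Add a process to the sorted waiting list."""
        heapq.heappush(self.waiting, (self.time + timeout, name, period))

    def tick(self):
        """One TIMER_ISR invocation: advance time, wake all due processes."""
        self.time += 1
        woken = []
        while self.waiting and self.waiting[0][0] <= self.time:
            due, name, period = heapq.heappop(self.waiting)
            woken.append(name)       # the 'wakeup' event
            if period is not None:   # periodic: re-add at the proper place
                heapq.heappush(self.waiting, (due + period, name, period))
        return woken

timer = TimerSubsystem()
timer.subscribe("one_shot", timeout=2)
timer.subscribe("periodic", timeout=1, period=2)
for _ in range(4):
    print(timer.tick())
# ['periodic'], ['one_shot'], ['periodic'], []
```

The heap keeps the earliest deadline at the top, so each tick only inspects entries that are actually due.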
Processes that use the services of the timing subsystem can, via the TIMER process, either subscribe (via the event subscribe) to the timeout service or generate a cancel event to cancel a previously requested timeout service. Since these activities actually update the waiting list, the list must be protected from being updated at the same time by the TIMER and TIMER_ISR processes. That is achieved in this case by disabling/enabling interrupts (the int_d / int_e events).

3.1.2 Watchdog

The design pattern for the Watchdog process (see Figure 8) relies on the services provided by the TIMER process. A user of the watchdog contract must first initialize it by specifying the timeout details in the data communication related to the start event. Then the watchdog uses the timer.subscribe event to subscribe to the timeout service (one-shot or periodic, depending on the mode parameter supplied by the user) of the timing subsystem. After subscribing to the timeout service, the watchdog is ready to accept any of three events: the timeout from the TIMER_ISR process (the wakeup event), or the hit or cancel events initiated by the process guarded via this watchdog. In case the user process initiates a
cancel event, the job of the watchdog is finished and it can cancel its timeout service and successfully terminate. The event hit updates a flag that keeps track of whether the hit event took place before the timeout occurred. When a timeout occurs, the status flag is checked, and if the event hit did not take place, further execution of the guarded process is interrupted with the abort event. Depending on the mode, the watchdog will either prepare itself for the next iteration by resetting the status flag or cancel the timeout service and successfully terminate. Figure 8 captures the watchdog interaction contract, with the definition of the contract manager and the specification of the roles TIMER and WD USER.
Figure 8. Watchdog interaction contract
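The watchdog behaviour of Figure 8 can be approximated by a small event loop. The Python sketch below is our reading of the pattern: the event names (hit, cancel, wakeup, abort) come from the text, while everything else (queues, function names) is hypothetical.

```python
import queue
import threading

def watchdog(events, actions, mode="one_shot"):
    """Accept hit/cancel/wakeup; emit abort on a missed deadline."""
    hit = False
    while True:
        ev = events.get()             # hit, cancel, or the timer's wakeup
        if ev == "cancel":
            actions.put("timer.cancel")   # job done: cancel timeout service
            return
        if ev == "hit":
            hit = True                # guarded process met its deadline
        elif ev == "wakeup":
            if not hit:
                actions.put("abort")  # timing fault: interrupt the user
            if mode == "one_shot":
                actions.put("timer.cancel")
                return
            hit = False               # periodic: prepare the next iteration

events, actions = queue.Queue(), queue.Queue()
t = threading.Thread(target=watchdog, args=(events, actions))
t.start()
events.put("wakeup")                  # timeout arrives before any hit
t.join()
print(actions.get())                  # abort
print(actions.get())                  # timer.cancel
```

A hit before the wakeup suppresses the abort, which matches the status-flag behaviour described above.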
3.1.3 One-shot Watchdog Use

The process Watchdog user (depicted in Figure 9) implements the wd_user role specified in the watchdog interaction contract given in Figure 8. While the required role describes an interaction that allows both periodic and one-shot use, the implementation of the role given in Figure 9 uses the watchdog in a one-shot configuration. First, the watchdog is activated via the event start. The next part of the program is guarded by the “take-over” operator [1], the SystemCSP equivalent of the interrupt operator from CSP. In case the watchdog signals a timeout by initiating the abort event, the normal execution branch is taken over by the branch that handles the watchdog timeout. In the normal execution branch, after the normal execution is finished, the hit event is initiated to update the status of the watchdog process.
Figure 9. Watchdog user in one-shot configuration
Figure 10. Symbol for using watchdog design pattern in one-shot configuration
The watchdog pattern is a useful safety design pattern, but it clutters the overview when one is interested in the normal execution flow only. For that reason, a special symbol, depicted in Figure 10, is introduced as an abbreviation for the watchdog design pattern used in a one-shot configuration. The start and hit events and the way watchdog timeouts are handled are considered to be part of the watchdog operator. Thus, they need not be depicted when the normal execution flow is emphasized. Again, the watchdog operator is represented via one pair of FORK and JOIN control flow elements. The symbol used for the watchdog operator is based on the combination of the take-over operator symbol and the letter T, which implies timeout. This choice was made because watchdog usage is in fact use of the take-over operator, where the take-over can take place upon timeout events.

3.1.4 Periodic Watchdog Use

In periodic usage of the watchdog, the difference is that the watchdog schedules a periodic timeout and that the guarded part of the user process is repeated in periodic iterations (compare Figure 9 and Figure 11). The assumption is that the process block normal execution (see Figure 11) starts by waiting on the periodic time event. The symbol used to abbreviate the periodic use of the watchdog design pattern is depicted in Figure 12.
Figure 11. Watchdog user in periodic configuration
Figure 12. Symbol for using watchdog design pattern in periodic configuration
3.2 Replica Management

In Figure 13, an interaction diagram is displayed that relates a client component with replicated server components via a ReplicaMgr contract.
Figure 13. Replica management – interaction diagram
All code related to managing the replication is situated in the ReplicaMgr contract, which enables reusing the same components in different replication configurations. Replicas can be on the same node or on different nodes, and can be identical or based on different designs (N-version programming). The ReplicaMgr can be on the same node as some of the replicas or on a separate node. In this section, SystemCSP-based designs are provided that specify the “hot standby”, “cold standby” and “majority voting” types of ReplicaMgr.
3.2.1 “Hot Standby”

In this design pattern, upon receiving a request from a client, all replicas are activated, but only the result obtained from the fastest one is actually used. The moment one of the replicas comes up with a result, further execution of the other replicas is aborted. The ReplicaMgr first receives a request from a client and then distributes the request in parallel to all involved replicas. In order to protect the interaction contract from the influence of failing components, sending the request to every replica is guarded using the watchdog design pattern. The replicas then work in parallel, and the interaction manager waits a limited time (again, the watchdog pattern is used) for one of the replicas to produce a result. This kind of waiting is realized using an external choice element. In case one of the replicas comes up with the result, the other two replicas are aborted. Since an attempt to abort execution or to engage in any synchronization with an offline component can lead to a deadlock, the process of aborting those replicas is again guarded by a watchdog.
Figure 14. “Hot standby”
Note that in the case where the replicas are invoked periodically and contain state (e.g. if the replicas are controller implementations), the state of the replica that produced the result should be communicated to the ReplicaMgr together with the result. In the next iteration, the state from the previous iteration should be communicated to all replicas. This approach prevents the internal states of the replicas from drifting apart.
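A rough analogue of the hot-standby behaviour can be written with a thread pool standing in for the watchdog-guarded parallel composition of Figure 14. The replica names, delays and timeout below are invented for illustration; this is a sketch of the idea, not the SystemCSP design.

```python
import concurrent.futures as cf
import time

def replica(name, delay, request):
    """A stand-in replica whose response time is 'delay' seconds."""
    time.sleep(delay)
    return f"{name} handled {request}"

def hot_standby(request, timeout=1.0):
    with cf.ThreadPoolExecutor(max_workers=3) as pool:
        # Distribute the request to all replicas in parallel.
        futures = [pool.submit(replica, n, d, request)
                   for n, d in [("r1", 0.30), ("r2", 0.01), ("r3", 0.30)]]
        # Wait a limited time for the first replica to produce a result;
        # the timeout plays the role of the watchdog.
        done, pending = cf.wait(futures, timeout=timeout,
                                return_when=cf.FIRST_COMPLETED)
        for f in pending:
            f.cancel()          # abort the slower replicas (best effort)
        if not done:
            raise TimeoutError("no replica answered in time")
        return done.pop().result()

print(hot_standby("req-1"))     # r2 handled req-1
```

Unlike the SystemCSP design, `Future.cancel` cannot interrupt an already running thread, so "abort" here is only best effort; in the contract-based design the abort is an explicit guarded event.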
3.2.2 “Cold Standby”
Figure 15. “Cold standby”
In the “cold standby” design pattern, a request is first forwarded to the first replica. If the first replica does not accept the request within the predefined time, the request is forwarded to the second replica, and if that one also fails to accept it, further on to the next replica in the chain. After the request is accepted by one of the replicas, the ReplicaMgr waits for a reply for a predefined time interval. If the reply does not arrive, a request is sent to the next replica in the chain. If the replica replies, the result is forwarded to the client. If no replica in the chain is able to provide the result, the error event is initiated.

3.2.3 “Majority Voting”

In this design pattern, the request is sent in parallel to all replicas. The sequence of sending the request to a replica and obtaining the reply from it is guarded by the watchdog pattern. In that way, a failing replica cannot block the ReplicaMgr process. The obtained results and status flags are used in the “majority voting” process block to reach an agreement on the correct result. In case it is impossible to deduce a result, the error event is initiated.
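The voting step itself reduces to counting the replies that arrived in time and accepting a value only when a strict majority of replicas agree. The Python sketch below is our illustration of that step only; names are hypothetical, and `None` stands for a replica whose watchdog timed out.

```python
from collections import Counter

def majority_vote(replies):
    """replies: one entry per replica; None marks a timed-out replica."""
    valid = [r for r in replies if r is not None]
    if not valid:
        raise RuntimeError("no valid replies")          # the 'error' event
    result, count = Counter(valid).most_common(1)[0]
    if count <= len(replies) // 2:
        raise RuntimeError("no majority agreement")     # the 'error' event
    return result

print(majority_vote([42, 42, None]))   # 42: 2 of 3 replicas agree
```

Note that the majority is taken over all replicas, not only the responsive ones, so a result supported by a single replica out of three is rejected even if the other two timed out.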
Figure 16. "Majority voting"
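The agreement step of the voting block can be sketched as follows; a minimal Python sketch, assuming each replica delivers a `(value, ok_flag)` pair where the flag is the watchdog-produced status:

```python
from collections import Counter

def majority_vote(results):
    # Keep only results whose watchdog-guarded exchange succeeded.
    valid = [value for value, ok in results if ok]
    if valid:
        value, count = Counter(valid).most_common(1)[0]
        if count > len(results) // 2:   # strict majority of all replicas
            return value
    # No agreement possible: initiate the error event.
    raise ValueError("error event: no majority result")
```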
3.3 Monitoring
The monitoring design pattern enables safe monitoring of components and systems. Every process that needs to be monitored is associated with an EventLogger contract executing in parallel with it. This EventLogger contract sends data further to the Monitor component. The Monitor component collects data from several monitored processes, can reason about different safety issues, and can invoke safety measures if needed. Figure 17 displays the interaction diagram relating the monitored process and the Monitor component via the EventLogger contract.
Figure 17. Monitor interaction pattern
In Figure 18, one can see that the monitored event is followed by an inserted additional event, which sends data to the EventLogger process. Note that the monitored event can also be an internal dummy event, created only for monitoring purposes, and that in fact any value from the monitored process can be logged at predetermined points in the process description via the EventLogger process. The EventLogger process logs data from the monitored process into a local buffer and, upon request, transfers it further to the Monitor component. The Monitor component keeps in shared memory the state of all variables relevant for its proper functioning.
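The buffer-then-transfer behaviour of the EventLogger can be sketched as a small class; the class name comes from the text, but the methods and capacity are assumptions of this sketch:

```python
import collections
import threading

class EventLogger:
    """Buffers (event, value) pairs from the monitored process and
    hands them to the Monitor component on request."""

    def __init__(self, capacity=256):
        self._buf = collections.deque(maxlen=capacity)
        self._lock = threading.Lock()

    def log(self, event, value=None):
        # Called right after each monitored event occurrence.
        with self._lock:
            self._buf.append((event, value))

    def drain(self):
        # Called by the Monitor component to collect buffered data.
        with self._lock:
            items = list(self._buf)
            self._buf.clear()
        return items
```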
Figure 18. Details of monitoring interaction pattern
Figure 19 illustrates how the whole design pattern is abbreviated in designs in order to allow designers to focus on normal execution and hide away the details of logging/monitoring. The symbol for the monitored event has a rectangle around the monitored SyncEvent process. This gives an intuitive impression that it is a more complex process than just an EventSync process. The MON keyword is used to signify that the event is monitored. In addition, it is possible to specify the name of the monitor.
Figure 19. Monitoring symbol
3.4 Exception Handling
The termination model of exception handling (see the take-over operator in [1]) is not always convenient because it destroys all the work performed in the aborted process. Often, one wants first to attempt recovery and avoid the need to abort the process. In SystemCSP, the recovery attempt upon an exceptional situation is still part of the same process. However, it is convenient to separate this part of the design into a special process block, allowing a visual separation of the normal mode and the exception handling mode (EHM). If an invalid state is observed in normal mode, further control flow may be designed to lead to the EHM via a recursion label. Figure 20 depicts a case where the EHMs of the contract and the involved roles/components can interact and agree on the ways to handle the exceptional situation. The normal mode and EHM blocks are two visually separated parts of the same process. Role1 of Component1 offers both resumption and termination methods for exception handling. The resumption method relies on a jump to the EHM part of the process that will attempt to handle an exception. The termination part is regulated by the take-over operator and the Abort procedure block. That is the reason the prefix arrows between those two blocks go in both directions. The contract, named Contract1 in this case, is a single process that is also visually divided into two process blocks: normal mode and EHM. The power of the configuration, where contract and component are structured in this way, comes from the fact that the interaction contract has additional knowledge about the current state of the interaction and can also obtain/maintain information on the status of the participating components. In that way, the interaction contract can pinpoint possible causes of the exceptional situation more precisely and propose, to the participating components, ways to handle it.
Figure 20. Cooperation of contract and component EHM layers
3.4.1 Event Poisoning
A convenient mechanism to pass information on the occurrence of an exceptional situation from a contract to the involved roles/components is event poisoning. This concept originates from channel poisoning, used for graceful termination in [15] and for transporting exceptions over process boundaries in the occam-like CT libraries [10, 16, 17]. Here, however, the mechanism is designed in a formally verifiable way. The events from the alphabet of a role participating in a contract that can produce exceptions are guarded for the occurrence of exceptional situations. Guarding is performed by sending additional status information on every occurrence of an event that can be poisoned by an exception. In case an EHM of a contract needs to notify a participating role/component about an exceptional situation, it initiates an event on which the component/role is waiting inside the interaction contract and sends the exception info as a status. In the normal mode of the role implementation, the obtained status is tested after every occurrence of an event guarded for exceptions, and when an exceptional situation is detected, control flow is given to the EHM layer. From the EHM layer, after recovery, it is possible to return to normal mode.
Figure 21. Event poisoning
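The status test after each guarded event occurrence can be sketched in Python, with an exception standing in for the transfer of control to the EHM layer; `sync_guarded` and the channel interface are assumptions of this sketch:

```python
class EventPoisonedError(Exception):
    """Signals that an event occurrence carried exception status."""

def sync_guarded(channel):
    # Every occurrence of a guarded event carries extra status info.
    value, status = channel.sync()
    if status is not None:
        # The contract poisoned the event: hand control to the EHM layer.
        raise EventPoisonedError(status)
    return value
```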
To avoid cluttering designs with the testing of status info after every event occurrence on guarded EventSync processes, a special notation, shown in Figure 21, is proposed for channels guarded for exceptional situations. The new symbol is a box around the EventSync process, with the keyword EHM written inside the box.
3.5 Checkpointing
Checkpointing is a backward recovery mechanism that relies on correcting an erroneous state by rolling back to some previous correct state. When the interaction of several concurrently executing processes is guarded from faults in this way, a domino effect can occur: one process causes another process to roll back, that process causes a third one to roll back, and so on, which can result in rolling back the first process even further, and so on. This makes asynchronous checkpointing of interaction unsuitable for real-time systems, where execution must be predictable in terms of time and memory requirements. The proposed design pattern relies on the interaction contract as a manager that keeps the roll-back process of the involved components synchronized. In this way, the domino effect is avoided and execution is predictable in terms of time and memory requirements.
Figure 22. Checkpointing
In this design pattern, the take-over operator, event poisoning, and splitting a process into a normal part and an exception handling part are used as mechanisms to implement checkpointing. In the example in Figure 22, two recovery lines are defined, splitting the participating components and the contract into two phases. When an exceptional situation is detected inside Phase 1 or Phase 2 of the contract, control is given to the part of that process dedicated to handling exceptions. For every participating component, this part will offer an external choice on all of the EventSync processes belonging to the alphabet of that role. Instead of their normal usage, these events will be used to transmit exception information to the components. Thus, a component that was blocked on one of those channels will be released from waiting and will get a notification of failure of the attempted event. As a consequence, it will go into the part of its process definition that deals with exception handling. Then, it will use a dedicated channel to communicate its status to the contract, or more precisely to the EHM part of the contract. The contract will wait for a predefined period of time to obtain the status information of all involved components. After that, it will perform analysis and establish whether the interaction should be reverted to some recovery line or aborted. Its decision will be communicated to the participating components.
4. Conclusions
This paper introduces a component framework for the SystemCSP design methodology. The notion of reusable and formally verifiable interaction contracts as a way to manage interactions in a structured way is introduced. The concept is illustrated by designing reusable interaction contracts for a set of the most commonly used fault tolerance design patterns. The design patterns presented here illustrate that SystemCSP is a graphical notation that can provide intuitive and readable modeling in addition to its formal verification capabilities.
The design patterns are concerned with often used, but rarely formalized, fault-tolerance concepts. In that sense, since SystemCSP is directly translatable to CSP, this paper is also a contribution to formalizing those patterns. Another important contribution of this paper is the introduction of a way to build systems around interaction diagrams, while still preserving firm, formally verifiable relationships across diagrams. A particular component can participate in many different interaction diagrams, where in addition to the managed interaction, execution relationships can be specified. This results in an incremental design of execution diagrams throughout the development process, by adding restrictions in different interaction diagrams. All the different diagrams are related into a single formally verifiable system. In addition to the future work specified in the companion paper [1], this paper gives design patterns that need to be implemented and tested in practice on the robotic setups in our lab.
References
[1] B. Orlic and J.F. Broenink, "SystemCSP - visual notation," presented at CPA 2006, IOS Press, Amsterdam.
[2] OPC Foundation, http://www.opcfoundation.org/.
[3] A.F. Zorzo, A. Romanovsky, J. Xu, B. Randell, R.J. Stroud, and I.S. Welch, "Using coordinated atomic actions to design safety-critical systems: a production cell case study," Software: Practice & Experience, vol. 29, pp. 677, 1999.
[4] M. Boosten, "Formal contracts: Enabling component composition," CPA 2003, IOS Press, Amsterdam.
[5] R.J. Allen, "A Formal Approach to Software Architecture," PhD thesis, Carnegie Mellon University, 1997.
[6] A. Beugnard, J-M. Jezequel, N. Plouzeau, and D. Watkins, "Making components contract aware," IEEE Computer, 1999.
[7] B. Meyer, "Applying "Design by Contract"," Computer, 1992.
[8] P. Nienaltowski and B. Meyer, "Contracts for concurrency," 2006.
[9] G.H. Hilderink, "Graphical modelling language for specifying concurrency based on CSP," IEE Proceedings. Software, vol. 150, pp. 108, 2003.
[10] G.H. Hilderink, "Managing Complexity of Control Software through Concurrency," PhD thesis, University of Twente, 2005.
[11] M. ten Berge, B. Orlic, and J.F. Broenink, "Co-Simulation of Networked Embedded Control Systems, a CSP-like process-oriented approach," 2006.
[12] D. Henriksson and A. Cervin, "TrueTime 1.13 - Reference Manual," Department of Automatic Control, Lund Institute of Technology, Lund, technical report, 2003.
[13] A.A. Avizienis, "Basic concepts and taxonomy of dependable and secure computing," IEEE Transactions on Dependable and Secure Computing, vol. 1, pp. 11, 2004.
[14] L.L. Pullum, Software Fault Tolerance Techniques and Implementation, Artech House, 2001.
[15] P.H. Welch, "Graceful termination --- graceful resetting," presented at the 10th Occam User Group Technical Meeting, IOS Press, Amsterdam, 1989.
[16] D.S. Jovanovic, "Designing dependable process-oriented software, a CSP-based approach," PhD thesis, University of Twente, 2006.
[17] G.H. Hilderink, "Exception Handling Mechanism in Communicating Threads for Java," presented at CPA 2005, IOS Press, Amsterdam.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
TCP Input Threading in High Performance Distributed Systems Hans H. HAPPE Department of Mathematics and Computer Science, University of Southern Denmark, DK-5230 Odense M, Denmark
[email protected]
Abstract. TCP is the only widely supported protocol for reliable communication. Therefore, TCP is the obvious choice when developing distributed systems that need to work on a wide range of platforms. For this to work, a developer has to use the standard TCP interface provided by a given operating system. This work explores various ways to use TCP in high performance distributed systems. More precisely, different ways to use the standard Unix TCP API efficiently are explored, but the findings apply to other operating systems as well. The main focus is how various threading models affect TCP input in a process that has to handle both computation and I/O. The threading models have been evaluated in a cluster of Linux workstations, and the results show that a model with one dedicated I/O thread generally is good. It is at most 10% slower than the best model in all tests, while the other models are between 30% and 194% slower in specific tests.
Keywords. Distributed systems, HPC, TCP
Introduction
The Transmission Control Protocol (TCP) has become the de facto standard for reliable Internet communication. As a result, much work has gone into improving TCP at all levels (hardware, kernel, APIs, etc.). This makes TCP the only viable choice for distributed applications that need to be deployed outside the administrative domain of the deployer. I.e., one might have gained access to a remote cluster, but this does not mean that the administrator is willing to meet specific communication requirements (protocols, operating systems). Grid and peer-to-peer are other examples of environments of this nature. TCP was designed as a reliable connection-oriented client-server protocol. From the user's point of view, TCP provides a way to stream bytes between two endpoints. Therefore, the user has to provide a stream encoding mechanism that ensures separation of individual messages (message framing). Encoding and decoding streams make TCP development more complicated and can lower performance. Message framing and other issues have been addressed in emerging protocols like the Stream Control Transmission Protocol (SCTP) [1] and the Datagram Congestion Control Protocol (DCCP) [2]. These protocols still need to mature and become generally available. This leaves TCP as the only choice. This work describes and evaluates various ways to use TCP communication in high performance distributed systems, particularly in systems where nodes act as both client and server. In this context a node is a Unix user-process that uses the kernel TCP API for communication. This duality raises the question of threading. If a client is I/O-bound with regard to communication, it can take on the role of server while waiting. This will avoid the overhead of context switching. In other scenarios multiple threads could be a better option. These different threading models are the main focus of this work. The work generally applies to a wide range of distributed systems that are based on TCP communication. Especially systems where nodes concurrently have to handle tasks outside the communication context might benefit from this work. High performance message passing [3,4] and software-based distributed shared memory [5,6] systems fit into this category. Grid and peer-to-peer based systems could also benefit from this work.
1. System Overview
Basically a distributed system consists of multiple interacting nodes. Each node is responsible for a subset of the system and might be a client entry point to the system. In the context of this paper a node is a process in an operating system, which can have multiple threads of execution, all sharing its address space. An I/O library handles TCP communication with other processes. Figure 1 gives a simple overview of the different components in a process. Services handle the distributed system responsibilities of the process. This includes communication, protocols, storage, etc. In some cases it is convenient that clients can become part of the process. In these cases performance and/or simplicity are more important than client separation.
Figure 1. System overview.
2. Unix TCP Communication
The basis for TCP communication in Unix is the socket, which is a general abstraction for all types of network related I/O. Like most kernel resources, a socket is referenced from user-space by a file descriptor that is valid until the user explicitly closes the socket. Before actual communication can start, a TCP socket has to be connected to the other end. Then data can be streamed between the endpoints by reading and writing to the sockets.
2.1. Sending
Sending a message is very simple because the decision to send implies that the content and context of the message is known.
Basically the content just needs to be encoded into a message format that can be decoded at the receiver. Then the message can be written to the socket that represents the destination. Writing to a TCP socket will copy the data to an in-kernel TCP buffer; if this buffer is full, the connection is saturated and the write will block. Obviously, this could result in deadlocks if not handled carefully. Avoiding deadlocks in distributed systems is a system design issue that
can not universally be solved by a communication abstraction (I/O library). Features like buffering can help to avoid deadlocks, but in the end it is the system design that should guarantee deadlock-free operation based on these features.
2.2. Receiving
It is a general fact in communication that the receiving side is harder to handle. Initially the receiver is notified about pending input, but only after the input is read can its context be determined. I.e., the kernel needs to process IP and TCP headers in order to route input to the correct destination socket. A similar kind of input processing has to be done in user-space in order to route data to the correct subsystem of an application.
2.3. Monitoring Multiple TCP Sockets
The fact that each TCP connection is represented by one file descriptor poses the question of how to monitor multiple connections simultaneously. Having one thread per socket to handle input is a simple way to monitor multiple sockets. This trades thread memory overhead for simplicity. While this memory overhead might be acceptable, it can also result in context switching overhead, which in turn pollutes the CPU cache and TLB (multiple stacks). In practice a thread waits for input by doing a blocking read system call on the socket. When the kernel receives data for the socket, it copies the data to the buffer provided by the read call and wakes up the thread, which can now return to user-space. This "half" system call (return from kernel) is a short wakeup path, and the input data will be available upon return. Most operating systems provide a way for a single thread to monitor multiple sockets simultaneously. Unix systems generally provide the system calls poll() and select(), but these have scalability issues [7]. Therefore, various other scalable but non-standard methods have been invented. Linux provides the epoll [8] mechanism, which is a general way to wait for events from multiple file descriptors.
Basically, a thread can wait for multiple events in a single system call (epoll_wait()). The call returns with a list of one or more ready events that need to be handled. In the socket input case an event is handled by doing a read on the ready socket. Compared to the multi-threaded model described above, this is a whole system call per socket in addition to the epoll_wait() system call. This overhead should be smaller than the context-switching overhead in the multi-threaded model for the single-threaded method to be an advantage. With a low input rate or single-socket activity this will not be the case.
3. Input Models
Both services and clients can start large computations as a result of new input. If communication should continue asynchronously during these computations, multiple threads are required. The best way to assign threads depends on the specific distributed system. The focus of this paper is a system where clients have their own thread. The thread might do communication or service work, but only when the client calls into the I/O library or a service. From the client's point of view this is a natural design, because it controls when to interact with the distributed system. Another characteristic is that services are I/O-bound. Basically they function as state machines acting on events from the I/O library and/or the client, without doing much computation. Extra threads could be added to handle service computations, but this will not be addressed in this paper. Figure 2 illustrates common input cases. Case a) is input directed to a service or a client without producing new output (response/forward). This can be handled in the context of the client thread when it is ready, because the input event does not affect other parts of the system. In case b) a service produces new output as a result of the input. This output could be a response to a request or some sort of forwarding and might be important for other nodes. The input should therefore be handled as soon as possible and should not have to wait for the client to be available for communication. This requires at least one extra thread for input handling. The following sections describe the threading models that will be evaluated in section 4. Only the input path is described, while details about sending and setting up connections are left out.
Figure 2. Input scenarios. Dashed arrows indicate events that might follow.
3.1. Model 1: Single Thread
In this model I/O and service processing are only handled when the client thread calls into these. When such a call can not be served locally, the client thread is directed to the I/O library in order to handle new input. When input arrives it is handled, and if it matches the requirements of the client, control is returned to the client (Figure 3). Case a) is handled perfectly because context switches are avoided when input for the client arrives. However, in case b) progress in the overall system can be stalled if the client is CPU-bound. Also, this model requires that multiple sockets can be monitored simultaneously as described in section 2.3.
Figure 3. Single thread model.
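The single-call monitoring that this model depends on can be sketched with the portable poll() interface; a minimal Python sketch (the helper name and arguments are invented here; an epoll object would be used the same way):

```python
import select

def read_ready(socks, timeout_ms=1000):
    poller = select.poll()
    by_fd = {s.fileno(): s for s in socks}
    for fd in by_fd:
        poller.register(fd, select.POLLIN)   # interested in input only
    ready = {}
    # One system call covers all sockets; each ready socket then costs
    # one additional read -- the trade-off discussed in section 2.3.
    for fd, event in poller.poll(timeout_ms):
        if event & select.POLLIN:
            ready[fd] = by_fd[fd].recv(4096)
    return ready
```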
Figure 4. Input thread model. The white thread is the input thread.
3.2. Model 2: Input Thread
In this model a dedicated thread is used for input handling. The thread starts in the I/O library, and when input arrives it delivers this to one of the services or the client (Figure 4). The exact details of how this multiplexing is done are not important in this context. In case the client is waiting for input, the input thread must wake up the client when this input arrives. This adds context switching overhead, but solves the issues with input case b). Again, this model requires that multiple sockets can be monitored simultaneously.
3.3. Model 3: A Thread per Socket
This model works similarly to M2 except that each socket has a dedicated input thread. This removes the overhead of monitoring multiple sockets, but also introduces new issues as described in section 2.3.
3.4. Models 1 and 2: Hybrid
Given the pros and cons of the described models, a hybrid between M1 and M2 would be interesting. The idea is to make the client handle I/O events while it is otherwise waiting for input. This requires a way to stop and restart the input thread by request from the client thread. While this is possible, it can not be done in a general way without producing context switches. The problem is that the input thread has to exit and reenter the event monitoring system call. New kernel functionality is needed in order for this model to work, and it will therefore not be evaluated in this paper.
4. Evaluation
The evaluation was done with a software-based distributed shared memory system, which currently is work in progress. It is based on the PastSet memory model [6] and fits into the system model described in section 1. The memory subsystem is implemented as services, and applications act as clients using these services. The results show the performance of the various models for this specific distributed system. Performance variations in these results have not been examined in close detail, but some hints as to why models perform differently are given in the description.
Low-level information about cache misses and context switches would be interesting if the goal was to improve operating systems, but this work targets the use of generally available communication methods.
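For contrast with the poll()-based input path of M1 and M2, the thread-per-socket input path of M3 (section 3.3) can be sketched as follows; a Python sketch with invented names, where a shared queue stands in for the delivery of input to services or the client:

```python
import threading

def start_reader_threads(socks, inbox):
    # One dedicated thread per socket, each blocked in recv() --
    # the short wakeup path of M3, at the cost of per-thread memory.
    def reader(sock):
        while True:
            data = sock.recv(4096)
            if not data:          # peer closed the connection
                break
            inbox.put((sock.fileno(), data))
    threads = [threading.Thread(target=reader, args=(s,), daemon=True)
               for s in socks]
    for t in threads:
        t.start()
    return threads
```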
4.1. Application
A special evaluation application that can simulate different computation and communication loads has been developed. Basically each process runs a number of iterations that have a communication and a computation part. How these parts work in each run is specified by load-time parameters. In the communication part one process writes some data to shared memory and all others read this data (like multicasting). The writer in each iteration is chosen in a round-robin fashion. A parameter comm defines how much data is written in each iteration. The computation part does a series of local one-byte read/update calculations. A parameter mem defines how much memory is touched by these calculations and is therefore the minimum number of read/updates done in each iteration. This makes it possible to test the effect of memory use in computations. Another parameter calc defines a maximum number of read/updates that will be done; the actual number of read/updates carried out in each iteration is chosen randomly from the range [mem; calc]. This makes computations uneven and ensures that processes are not in sync. A pseudo-random number generator is used to ensure comparability, and the generator is initialized with a different seed for each process.
4.2. Test Platform
The evaluation was performed on a 32 node Linux cluster interconnected by Gigabit Ethernet. Each node had an Intel Pentium 4 541 64-bit CPU with Hyper-Threading and ran version 2.4.21 of the Linux kernel. The kernel supported the new Native POSIX Threading Library (NPTL) [9], which was used in the evaluation. Hyper-Threading was effectively turned off so that the multi-threaded models did not get an advantage; this was done by forcing processes to stay on one CPU with the taskset(1) tool. The less scalable poll() system call was used to monitor multiple sockets, because the kernel did not support epoll (available in kernel versions 2.6.x).
This could have a negative effect on the performance of the M1 and M2 models.
4.3. Results
The tests were done by running a series of communicate/compute iterations (see section 4.1) and measuring the average iteration time. Each test was run three times to check for variations between runs. The variations were insignificant, so the average of these three runs is used in the following results.
4.3.1. I/O-bound
Figures 5 and 6 show the scalability of the different models without computation. M1 performs better than the other two threaded models, as expected. It is only marginally better than M2, and the difference is not even visible when communication increases (Figure 6). Therefore, M2 only imposes a small context-switching overhead compared to M1. The many threads in M3 give even more overhead as the number of nodes increases (Figure 5). With added communication, and therefore higher memory utilization, the overhead of threading really decreases performance (Figure 6). With 32 nodes, M3 almost triples the completion time compared to the other models. These observations indicate that thread memory overhead (stacks and task descriptors) is the cause of M3's performance issue. When plotting time as a function of communication load, the poor performance of M3 becomes even more apparent (Figure 7). M1 and M2 do equally well, although there is an anomaly starting at 16KB. M3 does not have this anomaly, which indicates that the monitoring of multiple sockets (poll()) is the cause. Longer I/O-bursts increase the chance that there are multiple sockets with input when poll() is called. This reduces the number of calls and therefore the total overhead of poll().
Figure 5. Scalability with 32B read/write and no computation.
Figure 6. Scalability with 16KB read/write and no computation.
Figure 7. Different read/write sizes and no computation.
Figure 8. Different computation loads and 32B read/write.
4.3.2. CPU-bound
Figures 8 and 9 show how the models perform with different levels of computation in the clients. Remember that the actual number of iterations is random, but the displayed values are maximums. As expected, M1 does not perform well when computation is increased, while M3 becomes the best model. The long computation periods spread communication events in time. The short wakeup path in M3 benefits from this, while the overhead of poll() is not amortized by handling multiple events per call. Consequently, M2 is slower than M3, but this might be resolved by using a more scalable monitoring method such as epoll. When the computations touch memory M2 wins, while M3 now becomes second best (Figure 10). This is presumed to be caused by the larger working set of M3.
Figure 9. Different computation loads and 16KB read/write.
Figure 10. Client memory utilization test with 32B read/write.
5. Related Work
Much work addresses TCP kernel interfaces [10,8,11,7] and revolves around monitoring multiple sockets. The general conclusion is that the performance of event-based interfaces is superior to threading. For CPU-bound workloads threading is needed, though. This is in line with the findings of this paper, because the best overall model (M2) combines event-based I/O and threads. TCP communication latency hiding, by overlapping communication with computations, is explored in [12,13]. While the advantages of this overlapping are clear, the evaluation is very limited. Only two nodes are used and the applications have well defined I/O- and CPU-bursts. In [14] an MPI [3] implementation that uses separate communication and computation threads is compared with a single-threaded implementation. These implementations correspond to models M2 and M1 respectively, and the results are similar.
6. Conclusions
Various ways to handle TCP input in high performance distributed systems have been evaluated. This was done for a specific case where nodes act as both client and server. In this context a node is a Unix user-process that uses the kernel TCP API for communication. Three input models with different ways of using threads were evaluated. The exact details of these models can be found in section 3, but this list gives a short summary:
M1: A single thread handling all work.
M2: A dedicated thread handling all TCP input and a client thread.
M3: A thread per TCP socket and a client thread.
The overall winner of the three models is M2. In cases where M1 or M3 is better, M2 is at most 10% slower. M1 wins in I/O-bound tests, because the single thread in this case only has to handle input events. On the other hand, it is the worst model in CPU-bound tests. M3 only wins in CPU-bound tests with low memory utilization. The short input wakeup path of M3 is believed to be the reason for its effectiveness in this special case, while its large memory working set (thread state) explains why it falls behind when computations touch memory.
M1 and M2 were implemented using the poll() system call for socket event monitoring. More scalable methods such as epoll were not available on the test platform. Using such methods should shorten the wakeup path in these models.

A hybrid between M1 and M2 would be an interesting subject for further research. While the client is waiting for input it might as well handle input events; when the input it is waiting for becomes available, it can start using it immediately without a context switch.

References

[1] L. Ong and J. Yoakum. RFC 3286: An Introduction to the Stream Control Transmission Protocol (SCTP), 2002.
[2] E. Kohler, M. Handley, and S. Floyd. RFC 4340: Datagram Congestion Control Protocol (DCCP), 2006.
[3] Message Passing Interface Forum. MPI: A message-passing interface standard. Technical Report UT-CS-94-230, 1994.
[4] V. S. Sunderam. PVM: a framework for parallel distributed computing. Concurrency, Practice and Experience, 2(4):315–340, 1990.
[5] David Gelernter. Generative communication in Linda. ACM Transactions on Programming Languages and Systems (TOPLAS), 7(1):80–112, 1985.
[6] Brian Vinter. PastSet: A Structured Distributed Shared Memory System. PhD thesis, Department of Computer Science, Faculty of Science, University of Tromsø, Norway, 1999.
[7] Dan Kegel. The C10K problem, 2004. http://www.kegel.com/c10k.html.
[8] L. Gammo, T. Brecht, A. Shukla, and D. Pariag. Comparing and evaluating epoll, select, and poll event mechanisms. Proceedings of the 6th Annual Linux Symposium, 2004.
[9] U. Drepper and I. Molnar. The Native POSIX Thread Library for Linux. White Paper, Red Hat, February 2003.
[10] J. Ousterhout. Why threads are a bad idea (for most purposes). Presentation given at the 1996 USENIX Annual Technical Conference, January 1996.
[11] J. Lemon. Kqueue: A generic and scalable event notification facility. Proceedings of the USENIX Annual Technical Conference, FREENIX Track, 2001.
[12] Volker Strumpen and Thomas L. Casavant. Implementing communication latency hiding in high-latency computer networks. In HPCN Europe, pages 86–93, 1995.
[13] Volker Strumpen. Software-based communication latency hiding for commodity workstation networks. In ICPP, Vol. 1, pages 146–153, 1996.
[14] S. Majumder, S. Rixner, and V.S. Pai. An Event-driven Architecture for MPI Libraries. Proceedings of the Los Alamos Computer Science Institute Symposium (LACSI'04), October 2004.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
A Cell Transterpreter

Damian J. DIMMICH, Christian L. JACOBSEN and Matthew C. JADUD
Computing Laboratory, University of Kent, Canterbury, CT2 7NF, UK
{djd20, clj3, mcj4}@kent.ac.uk

Abstract. The Cell Broadband Engine is a hybrid processor which consists of a PowerPC core and eight vector co-processors on a single die. Its unique design poses a number of language design and implementation challenges. To begin exploring these challenges, we have ported the Transterpreter to the Cell Broadband Engine. The Transterpreter is a small, portable runtime for concurrent languages and can be used as a platform for experimenting with language concepts. This paper describes a preliminary attempt at porting the Transterpreter runtime to the Cell Broadband Engine and explores ways to program it using a concurrent language.

Keywords. CELL processor, Transterpreter, Portable run-time environments
Introduction

Multi-core processors are becoming commonplace in desktop computers, laptops and games consoles [1,2,3]. Traditionally, programming such concurrent systems has been considered difficult [4,5]. We believe that creating and designing software that makes use of concurrency provided by hardware can be easy, given a language and runtime that provide support for concurrency. With a compiler that can check for certain errors in concurrent code, and a runtime that gracefully reports error conditions surrounding program deadlock, developing such software can become simpler.

The Sony PlayStation III [6], a consumer games console which will be released at the end of 2006, is an example of a readily available and affordable multi-core system. At its heart, the PlayStation III will be powered by the Cell Broadband Engine [7], commonly referred to as the Cell processor. The Cell, as seen in Figure 1, has a complicated architecture consisting of a PowerPC (PPC) core surrounded by eight vector processors, called Synergistic Processing Units [8] (SPUs). These nine processors are connected by a high-speed bus that provides fast inter-processor communication and access to system memory.

The Transterpreter [9,10] is a small and highly portable runtime for concurrent language research, and is used as the platform for the implementation described here. The Cell edition of the Transterpreter aims to address the architectural issues that must be overcome for the runtime to function in a useful manner. occam-π, a language supported on the Transterpreter, has built-in semantics for concurrency, and provides a compiler which supports the programmer in developing safe concurrent software [11]. occam-π provides language-level facilities for safe inter-process communication and synchronisation, and is derived from a formal model of concurrency that can be used to reason about programs [12,13].
The long-term goals of this project are to address the difficulties of programming the Cell processor and other concurrent systems which require a more involved design, such as a pipeline of processes or multiple instruction multiple data (MIMD) type problems, and cannot be effectively parallelised through the use of preprocessor directives and loop-level concurrency. This paper describes the implementation of a prototype concurrent runtime for the Cell processor, a first attempt at reaching our goals.
D.J. Dimmich et al. / A Cell Transterpreter
We begin this paper with an overview of the Cell processor's architecture, and the steps taken to port the Transterpreter to the Cell. We then present an overview of what programming the Cell using occam-π could involve, and close with a discussion of future work.

1. An Overview of the Cell Broadband Engine

The Cell Broadband Engine consists of a PowerPC core and eight vector processing units, connected via a high-speed bus. This architecture provides a number of challenges for language designers and programmers. While it was not possible to purchase a Cell processor at the time of writing, a cycle-accurate simulator for the Cell was available [14]. The simulator, available for Linux, lets programmers begin developing software for the Cell Broadband Engine. What follows presents background information on the Cell architecture which should help clarify some of the implementation details given later. This section closes with an overview of the challenges that this architecture presents and discusses IBM's attempt at addressing them.

1.1. PowerPC Core (PPC)

The PowerPC core was designed to be binary compatible with existing PowerPC-based processors, allowing the execution of pre-existing binaries on the Cell's PPC core without modification. The core has built-in hardware support for simultaneous multithreading, allowing two threads to execute in parallel on the processor. An AltiVec [15] Single-Instruction Multiple-Data (SIMD) vector-processing unit is also present.
Figure 1. A diagram of the Cell BE.
1.2. Synergistic Processing Units (SPU)

The SPU processors are dedicated vector processing units. Each SPU is equipped with a 256KB local store for program and data, and each unit has a dedicated memory controller attached to it. The memory controller, and hence memory access, is programmed explicitly from the SPU to manage data flow between system memory and its local store. The memory
controllers are able to move chunks of data up to 16KB in size to and from the SPU units' local stores without interrupting computation. The memory controllers also coordinate most of the inter-processor synchronisation.

Synchronisation and communication are achieved by reading from and writing to special-purpose memory-mapped registers designated for this task. Each SPU is equipped with a set of three 32-bit mailboxes (registers) for communication/synchronisation with the PPC. Two are outbound, blocking mailboxes, one of which interrupts the PPC when a message is sent. The third, inbound mailbox, is for receiving messages from the PPC. Each SPU is also equipped with two inbound 32-bit registers, called SigNotify1 and SigNotify2, which any SPU or PPC can write to, with either overwriting or blocking behaviours.

1.3. The Cell's Element Interconnect Bus

A key component of the Cell processor is the high-speed Element Interconnect Bus (EIB) through which all of the processing units and main memory are connected. The EIB is an on-chip bus which allows all the processing units to communicate with each other directly, without requiring access to main memory. A diagram of the Cell processor and how the EIB interconnects all of the elements can be seen in Figure 1 on the facing page.

1.4. The Cell's Challenges

The Cell processor contains two different processor types, and nine independent processing elements. This means that programs wishing to exploit all the processors have to be written as two or more separate programs, one of which is compiled for and executes on the PPC, and another which is compiled for and executes on the SPUs. These separate programs have to be able to synchronise and share data. This, and the need to manage the SPU memories explicitly, makes programming for the Cell considerably more difficult than programming for traditional processors.
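The non-overwriting register semantics described in section 1.2 amount to a one-slot buffer: a write stalls while the slot is full, and a read empties it. The following Python model is our own illustration of that behaviour, not Cell SDK code:

```python
import threading

class Mailbox:
    # Sketch of a Cell-style one-slot mailbox register: writes block
    # while the slot is full, and a read empties the slot.
    def __init__(self):
        self._slot = None
        self._full = False
        self._cond = threading.Condition()

    def write(self, word):
        with self._cond:
            while self._full:          # stall until previous message read
                self._cond.wait()
            self._slot, self._full = word, True
            self._cond.notify_all()

    def read(self):
        with self._cond:
            while not self._full:
                self._cond.wait()
            word, self._full = self._slot, False
            self._cond.notify_all()    # wake any stalled writer
            return word

mbox = Mailbox()
received = []
reader = threading.Thread(
    target=lambda: [received.append(mbox.read()) for _ in range(3)])
reader.start()
for w in (1, 2, 3):   # a multi-word message, passed one word at a time
    mbox.write(w)
reader.join()
print(received)  # [1, 2, 3]
```

Because each write can only complete after the previous word has been read, a multi-word transfer arrives in order, which is how the runtime later streams longer messages through a single register.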
IBM, being aware of these difficulties, has chosen to address them by developing an auto-parallelising compiler [16] for the Cell processor. It relies on the concurrency inherent in loops, and on preprocessor directives [17] that tell the compiler which transformations are safe, in order to create parallel code. Such auto-parallelisation provides a fast mechanism for making use of the additional processing power available on a concurrent system without the need to rewrite existing code. Unfortunately, not all code can be modified or annotated easily to gain performance advantages from auto-parallelising compilers. An alternative to automatic parallelisation could provide the ability to make use of the Cell without requiring expert programming.

2. The Transterpreter on the Cell Broadband Engine

The Transterpreter is a highly portable interpreter for concurrent languages. It provides a means for running a concurrent language on a new platform in a short amount of time, circumventing the need to write a new compiler backend for a given language. With the port of the Transterpreter to the Cell, we hope to explore how we can use occam-π, a language designed with concurrency in mind, in the context of the Cell processor, and gain an understanding of what would be required to port the language to the Cell processor. The core of the Transterpreter is portable across platforms because it has no external dependencies, and it builds with any ANSI-compliant C compiler. For the Transterpreter runtime to be useful on a given architecture, a platform-specific wrapper needs to be written which provides an interface to the underlying hardware. In the case of the Cell, two separate wrappers were needed: one for the PPC core, and one for the SPUs.
Porting the Transterpreter to the Cell required careful thought regarding occam-π channels and scheduling. One of the things that had to be taken into account when writing the wrappers was the mapping of occam-π channels onto the Cell's communication hardware. This was difficult because the underlying hardware does not map directly onto the semantics of CSP/occam-π channels. Furthermore, the Transterpreter's scheduler needed to be extended to support the handling of interrupts generated by the Cell.

2.1. Program Distribution

The Transterpreter for the Cell consists of a PPC executable which is programmed to load and start SPU Transterpreter executables. Once the PPC and the SPUs are all running, the program bytecode designated for execution is loaded into main memory. A pointer to the program bytecode is passed to all the SPUs, which then copy the program bytecode into their local stores from system memory. Once the copy is complete, all the Transterpreters begin executing the bytecode. Currently all Transterpreter instances receive the same copy of the bytecode. In order for program flow to differ between processors, a program must query the Transterpreter about its location in order to determine what to do. A program's location can be determined by the unique CPU ID, a number from 0 to 8, that each Transterpreter instance is assigned at startup.

2.2. Inter-Processor Communication

occam-π channels are unidirectional, blocking, and point-to-point. The individual SPUs of a Cell processor are not so limited in their communications; therefore, both the compiler and the wrappers must provide support to make our mapping from occam-π to the Cell hardware consistent and safe. The blocking nature of channel communications provides explicit synchronisation points in a program.
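To make those channel semantics concrete, the following Python sketch (our own illustration, not Transterpreter code) implements an unbuffered, point-to-point rendezvous in the style of an occam-π channel: the writer cannot proceed until the reader has taken the value.

```python
import threading
import time

class Channel:
    # Sketch of an occam-style channel: unbuffered and point-to-point.
    # write() only returns once read() has taken the value, so every
    # communication doubles as a synchronisation point.
    def __init__(self):
        self._value = None
        self._ready = threading.Semaphore(0)   # a value is available
        self._taken = threading.Semaphore(0)   # the reader has taken it

    def write(self, value):
        self._value = value
        self._ready.release()
        self._taken.acquire()   # rendezvous: block until the read happens

    def read(self):
        self._ready.acquire()
        value = self._value
        self._taken.release()   # release the blocked writer
        return value

ch = Channel()
t = threading.Thread(target=lambda: ch.write(42))
t.start()
time.sleep(0.05)
still_blocked = t.is_alive()    # True: no reader yet, so the write blocks
value = ch.read()               # rendezvous completes here
t.join()
print(still_blocked, value)     # True 42
```

This is exactly the behaviour the wrappers must recover from the Cell's register hardware, which on its own does not force the writer to wait for the read.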
While the compiler checks for correct directional usage of channels when compiling, the Transterpreter wrappers must ensure that channel communications between processors are blocking and unbuffered.

2.2.1. SPU to PPC Communication

The SPU-to-PPC mailbox registers are word-sized (32-bit), unidirectional, non-overwriting buffers. When empty, a mailbox can be written to, and execution can continue without waiting for the mailbox to be read. When a mailbox is read, it is emptied automatically. When a mailbox is full, the writing process will stall until the previous message has been read. The SPU can receive interrupts when one of its mailboxes is read from, or written to. The mailbox registers are used to implement occam-π channel communications between the PPC and the SPU. In order to preserve the channel semantics of occam-π, a writing process is taken off the run queue and set to wait for the "mailbox outbound read" interrupt to occur. The communication only completes when the mailbox is read by the PPC. The SPU is able to quickly pass multi-word messages by continuously writing to the mailbox while the PPC continuously reads. The PPC does not receive interrupts when its outbound message is read, and must poll the mailbox to check if it has been read.

2.2.2. Inter-SPU Communication

For SPU-to-SPU communications, two inbound registers, SigNotify1 and SigNotify2, are provided on each SPU. The registers are programmatically configured by the Transterpreter to be non-overwriting and, like the mailbox registers, can only be cleared by a read. The Transterpreter provides facilities for sending both large messages and short, word-sized messages between SPUs. On Transterpreter startup, space is reserved in main memory for the sending of large messages. Eight 16KB chunks of memory are allocated for each SPU to receive data in.
Each SPU then receives a list of pointers to the memory locations that it can write to. For example, SPU 2 will receive pointers to the second 16KB chunk of each SPU's receiving memory. This ensures that when two SPUs are sending data to a third SPU, no portion of memory is overwritten accidentally. When a write to another SPU is initiated, the data to be sent is copied into the appropriate 16KB chunk of memory. Once the copy completes, the writer SPU puts its CPU ID into SigNotify1 on the destination SPU. The writer process is then moved to the interrupt queue in the scheduler and waits for confirmation of the read. Once SigNotify1 has been written to on the reader SPU, an interrupt is generated on it, informing the SPU about the write. If a process is waiting on the interrupt queue awaiting data, it copies the message from main memory into its local store, using the 'read-only' pointer that it was provided with at startup. Once the copy is completed, the reader SPU acknowledges completion of the read by putting a flag value into SigNotify1 on the writer SPU. At this point the reading SPU process can be taken off the interrupt queue and can continue execution. The writer SPU process, which had been waiting on the interrupt queue, checks that it has received a read confirmation from the reader SPU, and can continue executing once it is rescheduled.

Alternatively, for short messages or synchronisation, it is possible to send word-sized messages between SPUs. This capability is useful since it allows for communication and synchronisation without taxing the memory bus. In this case the SigNotify2 register is used for sending data and SigNotify1 for the sender's CPU ID. In order to determine the length and type of message being sent, the program must be written such that each SPU may only send one message at a time to another SPU.
Furthermore, the reader and the writer have to agree on the size of the message at compile-time. These factors determine if a memory copy or the SigNotify2 register is used for sending data.
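The large-message handshake described above can be summarised in a small simulation. The following Python sketch is our own model of the protocol; names such as ACK_FLAG are invented for illustration:

```python
# Sketch of the inter-SPU large-message handshake, simulated with plain
# Python objects. SPUs are numbered 1..8; each SPU has one 16KB receive
# chunk reserved per potential writer.
CHUNK = 16 * 1024
NUM_SPUS = 8
ACK_FLAG = 0xFFFFFFFF  # hypothetical acknowledgement value

main_memory = {spu: [None] * NUM_SPUS for spu in range(1, NUM_SPUS + 1)}
signotify1 = {spu: None for spu in range(1, NUM_SPUS + 1)}

def send(writer, reader, data):
    assert len(data) <= CHUNK
    # 1. Copy the data into the chunk reserved for this writer
    #    in the reader's receiving memory.
    main_memory[reader][writer - 1] = data
    # 2. Signal: put the writer's CPU ID into the reader's SigNotify1.
    signotify1[reader] = writer

def receive(reader):
    # 3. The SigNotify1 interrupt tells the reader who wrote;
    #    reading the register clears it.
    writer = signotify1[reader]
    signotify1[reader] = None
    data = main_memory[reader][writer - 1]   # copy into local store
    # 4. Acknowledge, so the writer's process can be rescheduled.
    signotify1[writer] = ACK_FLAG
    return writer, data

send(2, 5, b"hello from SPU 2")
writer, data = receive(5)
print(writer, data)
```

Because SPU 2 only ever writes into chunk 2 of any receiver, two concurrent writers to the same SPU can never clobber each other's data, which is the point of the per-writer chunk assignment.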
2.3. Scheduling
The Transterpreter uses a co-operative scheduler with three queues: the run queue, the timer queue and the interrupt queue. Process rescheduling only occurs at well-defined points during execution. This implies that the interrupt queue is only checked when the scheduler is active. If interrupts are ready, processes waiting on them are moved from the interrupt queue to the back of the run queue. This behaviour strongly encourages programmers to make use of occam-π's concurrency features, so as not to have a processor stalling whenever a process needs to wait on an interrupt. occam-π processes have a very low overhead in terms of memory usage and context switching. This allows a programmer to develop programs with many concurrent processes, making use of the processor while some processes are waiting for external communication to complete. Should a programmer wish to write programs in a more sequential paradigm, buffer processes can be used to handle communications. Figure 2 illustrates how a writing buffer process can be used when communicating with another processor. This way, only the buffer process stalls while waiting for communication to complete, allowing other processes to continue executing. Similarly, a dedicated reading buffer process can ensure that data is read in as soon as it is available, reducing potential stalls in the network and keeping all the processors busy. This can be made particularly effective by running the buffer processes at a higher priority than other processes.
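The scheduling rule described above, that interrupts are only serviced at reschedule points and woken processes join the back of the run queue, can be sketched as follows (our own Python model, not Transterpreter code):

```python
from collections import deque

# Cooperative scheduling sketch: the interrupt queue is only inspected
# when the scheduler runs, and a woken process joins the *back* of the
# run queue rather than preempting the current process.
run_queue = deque(["compute.a", "compute.b"])
interrupt_queue = [("io.buffer", "mailbox-read")]
pending_interrupts = set()
trace = []

def schedule():
    # Move every process whose interrupt has fired onto the run queue.
    for proc, irq in list(interrupt_queue):
        if irq in pending_interrupts:
            interrupt_queue.remove((proc, irq))
            run_queue.append(proc)
    return run_queue.popleft() if run_queue else None

pending_interrupts.add("mailbox-read")   # the interrupt fires...
trace.append(schedule())                 # ...but compute.a still runs first
trace.append(schedule())
trace.append(schedule())
print(trace)  # ['compute.a', 'compute.b', 'io.buffer']
```

Even though the mailbox interrupt fires before the first reschedule, io.buffer runs only after the processes already queued, which is why keeping many small processes runnable matters on this scheduler.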
Figure 2. The read and write buffers allow computation to continue with less interruption.
3. Programming the Cell Using the Transterpreter

The Transterpreters that are executing on the SPUs are each assigned a unique CPU ID. A native function call mechanism in occam-π allows the programmer to call C functions that are a part of the Transterpreter. Using the native call TVM.get.cpu.id, the program can determine on which processor it is executing. A CPU ID value of 0 is returned if the bytecode is running on the PPC, and a value between 1 and 8 if it is on one of the SPUs. An example of how an occam-π startup process on the Cell could be written is shown in Listing 1.

PROC startup(CHAN BYTE kyb, err, src)
  INT id:
  SEQ
    TVM.get.cpu.id(id)    -- check where we are running
    IF
      id = 0
        ...               -- execute PPC code.
      id > 0
        ...               -- execute SPU code.
:

Listing 1. An example of a startup process on the Cell Transterpreter.
In order to send and receive messages between processors, the native functions TVM.read.mbox and TVM.write.mbox are provided. These native functions behave similarly to occam-π channels in that they block until the communication has completed. An example of their use is shown below in a program where nine Transterpreters are running concurrently and are connected in a ring. All the processes in the pipeline do is increment a value as it propagates through. The result of the incrementing is printed each time it comes back to the process that originated the value.

The process run.on.spu in Listing 2 reads values from the previous processor in the pipeline. The value is then incremented and sent on to the next processor. Because of the modulo operator "\", when the process is running on the processor with CPU ID 8, it sends the value back to the PPC, whose CPU ID is 0.

VAL INT stop IS 95:

PROC run.on.spu(VAL INT cpuid)
  INITIAL INT value IS 0:
  INT status:
  WHILE value < stop
    SEQ
      TVM.read.mbox((cpuid - 1), value, status)
      value := value + 1
      TVM.write.mbox((cpuid + 1) \ 9, value)
:

Listing 2. A process running on an SPU.
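The behaviour of this ring can be checked with a small sequential simulation (our own, in Python): the PPC injects the value 65 ('A'), each lap through the eight SPUs adds eight, and a character is printed each time the value returns to the PPC.

```python
STOP = 95
value = 65                     # 'A', the initial value injected by the PPC
printed = []
while value < STOP:            # the PPC's loop
    for spu in range(1, 9):    # one lap: SPUs 1..8 each increment once
        value = value + 1
    printed.append(chr(value)) # the PPC prints each returned value
print("".join(printed))  # IQYa
```

The simulation shows four laps: the value returns as 73, 81, 89 and 97 before the loop condition stops the program.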
The process run.on.ppu in Listing 3 is executed only on the PPC core. The process header denotes that a CHAN BYTE must be passed to it as a parameter. This is a channel of type BYTE, the equivalent of a char in C. In this program it is connected to the screen channel that is used as a method of output. The "!" is used to denote a write to a channel, where the RHS contains the value to be written, and the LHS the name of the channel to write to. The process starts propagating a value down the pipeline by writing to the first SPU. It waits for the value to complete its journey through the pipeline, and then outputs the modified value, followed by a return character, by writing to the scr channel.

PROC run.on.ppu(CHAN BYTE scr!)
  INITIAL INT value IS 65:
  INT status:
  WHILE value < stop
    SEQ
      TVM.write.mbox(1, value)
      TVM.read.mbox(8, value, status)
      scr ! (BYTE value)
      scr ! '*n'
:

Listing 3. The process which runs on the PPC and outputs data to the screen.
Listing 4 shows the startup process, startup. This gets the CPU ID using the TVM.get.cpu.id function and runs the appropriate process depending on the value of cpuid. In this process, the kyb, scr and err channels are obtained 'magically' by the starting process, much like command line parameters are obtained in C's main function. When the run.on.ppu process is started, the scr channel is passed to it as a parameter so that it can output to the screen. The run.on.spu process receives the CPU ID as a parameter.

PROC startup(CHAN BYTE kyb?, scr!, err!)
  INT cpuid:
  SEQ
    TVM.get.cpu.id(cpuid)
    IF
      cpuid = 0
        run.on.ppu(scr!)
      cpuid > 0
        run.on.spu(cpuid)
:

Listing 4. The startup process which starts the correct process depending on where the code is running.
In the future, a library can be created which wraps the native TVM calls to provide a channel interface for communicating between processes. This is desirable because it reflects the more common occam-π programming methodology, where processes communicate through channels to achieve synchronisation and send data.

4. Future Work

In the simplest case there are obvious improvements that can be made to the Transterpreter for running occam-π programs on the Cell. The most obvious are infrastructure modifications that will allow the channel abstraction to be used for communication between processors. Further improvements could leverage the vector processing capabilities of the Cell BE architecture more effectively. Finally, we would like to explore code generation possibilities using the Transterpreter.
4.1. Infrastructure

The current implementation of the Transterpreter on the Cell does not provide a means for channel communications to occur between processors. Currently, sending data using TVM.write.mbox in parallel would result in unpredictable behaviour. Direct support for channel communications would allow compile-time checking, which could ensure a degree of safety. Furthermore, a method for abstracting the channel connections between processors, akin to pony's [18] virtual channels, would allow multiple channels to exist between processors. This would enable a much more flexible mode of programming by multiplexing channels automatically.

4.2. Vector Processing

While C does not have good concurrency primitives, some implementations have extended the language to include vector processing functionality. A C-based vector processing library could be embedded in the SPU wrappers for the Transterpreter, and an interface to it could be created using the occam-π extension for SWIG [19], a foreign-function interface generator. The occam-π language could also be extended to support vector primitives using the language's support for operator overloading [20]. To support vector primitives at a bytecode level, a compiler that provided a means for easily adding new syntax would be needed. The Transterpreter bytecode would need to be extended and the runtime modified accordingly to allow the updated bytecode to function.

4.3. Code Generation

While being able to run occam-π and the Transterpreter on a platform like the Cell is interesting [21], it is impractical. Even though the runtime has a small memory footprint in comparison with other virtual machines, the limited amount of local store available on the SPUs becomes even smaller when it needs to be shared by the bytecode and the Transterpreter.
Additionally, the current implementation of the Transterpreter replicates the same bytecode to each processor, meaning that unused code is carried around, wasting further resources. Furthermore, the overhead of running a bytecode interpreter is particularly large since the SPU is only capable of 128-bit loads and stores, and a penalty is paid on all scalar operations due to the masking and unmasking necessary for processing 32-bit values. Because of the high memory consumption, and the overhead caused by bytecode interpretation, occam-π and the Transterpreter become unattractive to developers who need the specialised processing power that the Cell offers. In order to address these issues, we have begun exploring the idea of generating native code from occam-π [22] using parts of the Transterpreter runtime and gcc [23]. Such a solution would allow us to combine the speed associated with C and the safe concurrency that occam-π has to offer. A manner of either automatically inferring, or specifying through annotation, which parts of the code are required on which processor would allow for dead-code elimination and hence smaller binaries on the SPUs. The current implementation of the Transterpreter for the Cell provides a basis for code generation for the Cell. It has shown that a language like occam-π is usable on a platform such as the Cell. The lessons learned, and parts of the code written, during the implementation of the Cell Transterpreter can be reused in the generation of C from occam-π for the Cell.
5. Conclusions

Our long-term goals aim to address the difficulties of programming the Cell processor and other concurrent systems which cannot be parallelised through the use of preprocessor directives and loop-level concurrency. We want to establish what a modern, highly concurrent language must provide to be an effective tool for solving modern-day computational problems. Using the Transterpreter and occam-π as our starting point, we plan to extend the language to provide direct support for modern hardware. The prototype implementation of the Transterpreter for the Cell has shown that it is possible to use this type of runtime on an architecture like the Cell, and it is a first step towards achieving our future goals.
References

[1] Justin Rattner. Multi-core to the masses. In PACT '05: Proceedings of the 14th International Conference on Parallel Architectures and Compilation Techniques (PACT'05), page 3, Washington, DC, USA, 2005. IEEE Computer Society.
[2] Shekhar Y. Borkar, Pradeep Dubey, Kevin C. Kahn, David J. Kuck, Hans Mulder, Stephen S. Pawlowski, and Justin R. Rattner. Platform 2015: Intel processor and platform evolution for the next decade. Technical report, Intel Corporation, 2005.
[3] Poonacha Kongetira, Kathirgamar Aingaran, and Kunle Olukotun. Niagara: A 32-way multithreaded SPARC processor. IEEE Micro, 25(2):21–29, 2005.
[4] Edward A. Lee. The problem with threads. Computer, 39(5):33–42, 2006.
[5] Hans-J. Boehm. Threads cannot be implemented as a library. In PLDI '05: Proceedings of the 2005 ACM SIGPLAN conference on Programming language design and implementation, pages 261–268, New York, NY, USA, 2005. ACM Press.
[6] Sony. PlayStation III, 2006. http://www.us.playstation.com/PS3.
[7] D. Pham et al. The Design and Implementation of a First-Generation CELL Processor. In ISSCC Digest of Technical Papers, pages 184–185, February 2005.
[8] B. Flachs et al. A Streaming Processing Unit for a CELL Processor. In ISSCC Digest of Technical Papers, Session 7 / Multimedia Processing / 7.4, 2005.
[9] Christian L. Jacobsen and Matthew C. Jadud. The Transterpreter: A Transputer Interpreter. In Ian R. East, David Duce, Mark Green, Jeremy M. R. Martin, and Peter H. Welch, editors, Communicating Process Architectures 2004, volume 62 of Concurrent Systems Engineering Series, pages 99–106. IOS Press, Amsterdam, September 2004.
[10] Christian L. Jacobsen and Matthew C. Jadud. Towards concrete concurrency: occam-pi on the LEGO Mindstorms. In SIGCSE '05: Proceedings of the 36th SIGCSE technical symposium on Computer science education, pages 431–435, New York, NY, USA, 2005. ACM Press.
[11] F.R.M. Barnes and P.H. Welch. Communicating Mobile Processes. In I. East, J. Martin, P. Welch, D. Duce, and M. Green, editors, Communicating Process Architectures 2004, volume 62 of WoTUG-27, Concurrent Systems Engineering, ISSN 1383-7575, pages 201–218, Amsterdam, The Netherlands, September 2004. IOS Press. ISBN: 1-58603-458-8.
[12] C. A. R. Hoare. Communicating Sequential Processes. Prentice-Hall, 1985.
[13] R. Milner, J. Parrow, and D. Walker. A Calculus of Mobile Processes – parts I and II. Journal of Information and Computation, 100:1–77, 1992. Available as technical reports ECS-LFCS-89-85/86, University of Edinburgh, UK.
[14] IBM Full-System Simulator for the Cell BE, 2006. http://www.alphaworks.ibm.com/tech/cellsystemsim.
[15] Keith Diefendorff, Pradeep K. Dubey, Ron Hochsprung, and Hunter Scales. AltiVec Extension to PowerPC Accelerates Media Processing. IEEE Micro, 20(2):85–95, 2000.
[16] A. E. Eichenberger et al. Using advanced compiler technology to exploit the performance of the Cell Broadband Engine architecture. IBM Systems Journal, 45:59–84, January 2006.
[17] Leonardo Dagum and Ramesh Menon. OpenMP: An Industry-Standard API for Shared-Memory Programming. IEEE Comput. Sci. Eng., 5(1):46–55, 1998.
[18] Mario Schweigler, Fred Barnes, and Peter Welch. Flexible, Transparent and Dynamic occam Networking with KRoC.net. In Jan F. Broenink and Gerald H. Hilderink, editors, Communicating Process Architectures 2003, volume 61 of Concurrent Systems Engineering Series, pages 199–224, Amsterdam, The Netherlands, September 2003. IOS Press.
[19] Damian J. Dimmich and Christian L. Jacobsen. A Foreign Function Interface Generator for occam-pi. In J. Broenink, H. Roebbers, J. Sunter, P. Welch, and D. Wood, editors, Communicating Process Architectures 2005, pages 235–248, Amsterdam, The Netherlands, September 2005. IOS Press.
[20] D.C. Wood and J. Moores. User-Defined Data Types and Operators in occam. In B.M. Cook, editor, Architectures, Languages and Techniques for Concurrent Systems, volume 57 of Concurrent Systems Engineering Series, pages 121–146, Amsterdam, the Netherlands, April 1999. WoTUG, IOS Press.
[21] Occam-com mailing list. http://www.occam-pi.org/list-archives/occam-com/.
[22] Christian L. Jacobsen, Damian J. Dimmich, and Matthew C. Jadud. Native code generation using the Transterpreter. In Peter Welch, Jon Kerridge, and Fred Barnes, editors, Communicating Process Architectures 2006, Concurrent Systems Engineering Series, September 2006. IOS Press, Amsterdam.
[23] GCC Team. GNU Compiler Collection homepage. http://gcc.gnu.org/, 2006.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Mobile Robot Control: The Subsumption Architecture and occam-π
Jonathan SIMPSON, Christian L. JACOBSEN and Matthew C. JADUD
Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NZ, England
{js219, clj3, mcj4}@kent.ac.uk

Abstract. Brooks' subsumption architecture is a design paradigm for mobile robot control that emphasises re-use of modules, decentralisation and concurrent, communicating processes. Through the use of occam-π the subsumption architecture can be put to use on general-purpose modern robotics hardware, providing a clean and robust development approach for the creation of robot control systems.

Keywords. Mobile robots, robot control, subsumption architecture, occam-π
Introduction

Robotic control can be seen as a mixture of engineering and cognitive science, and as such it presents unusual challenges to the programmer. Robotic control methodologies have tended to move from simplistic, predefined actuator actions based on specific input criteria to tight feedback loops with input from the environment, giving more robust solutions. In these environments, where many such control loops are required, the opportunities for the application of parallel programming to create simple and robust solutions are numerous.

Continuous, concurrently running processes are critical to robotics, as a robot typically has a number of inputs and outputs that must be handled simultaneously. If we wish to keep such a robot from running into walls, at least one process must continuously monitor the space between the robot and nearby objects using some sort of range-finder. Whilst this specific behaviour may be important, we cannot focus solely on one sensor to the exclusion of all other behaviours the robot may be designed to perform. Even simple robots can have many different tasks to do simultaneously. For example, a robot might try to avoid bumping into walls whilst also trying to create the largest treacle pudding in the world. The latter task is the main purpose of the robot, but the former is important if the robot is to meet its goal as designed, and needs to be handled constantly alongside the robot's main task.

1. Traditional Approaches to Robotic Control

1.1. The Hierarchical Paradigm

A hierarchical approach to robotic control focuses mainly on the planning aspect of a robot's behavioural cycle. The robot senses its environment, plans its next action based on these senses, and then takes appropriate action using available actuators. At every stage, the robot explicitly plans its next action from the knowledge it has gathered about the environment so far. Essentially these robots are reflex agents [1], selecting actions from rule matches on the current 'perceptions' from sensory input.
J. Simpson et al. / Mobile Robot Control
This approach traditionally employs a top-down analysis of the desired behaviour of the robot during the design phase, followed by the implementation of a sequence of modules. These modules read values from the available sensors to provide data about the environment (perception), devise strategies to perform the desired behaviours given the environmental state (cognition), and then compose the signals that control the actuators to achieve those behaviours (action).
Figure 1. The typical structure of a hierarchical robot control system
Unfortunately this approach, using a top-down design and sequential modules, does not encourage a separation of concerns, and can introduce dependencies between functional layers, especially where feedback loops are used or output monitoring is required. Brooks [2] identifies a different model whereby cognition can be observed simply using perceptive and action systems that interact directly with each other in a feedback loop through the environment, effectively a behavioural paradigm for control. He called this biologically-inspired model of robotic control the subsumption architecture.

1.2. The Behavioural Paradigm

Behavioural control centres on the idea of removing centralised control structures and instead linking actions directly to changes in the input sources themselves. This approach is most fully demonstrated by Valentino Braitenberg's Vehicles [3], a set of sensors and actuators connected almost directly together in various combinations to display emergent behaviours that mimic more complex human actions like love, aggression and cowardice. Emergent behaviour can often arise from the interaction of simple behaviours with a complex environment, and being able to take advantage of these emergent behaviours can make the design of robotics systems simpler and the code involved more robust. Behavioural systems often employ a set of pre-programmed condition-action rules which are run concurrently over the inputs, and the system has little internal state. This architecture maps well to occam-π robotics, but it can make the development of specific and complex robotic controls hard, because changes of behaviour must be driven by changes in the environment.

2. Brooks' Subsumption Architecture

The subsumption architecture involves building robot control systems with increasing levels of competence.
Each additional level builds upon and potentially interacts with the inputs and outputs of existing, previous levels to add higher levels of competency, leaving the lower levels intact, functional and operational within the overall system.
Levels are constructed from components which make up the architecture as a whole. These components are referred to as 'modules', and consist of small asynchronous processors, sending messages over connecting 'wires' which have a single-element buffer. Inputs to these modules can be suppressed and their outputs can be inhibited by wires from other modules. Unlike occam-π channels, these wires are assumed to suffer frequent message loss due to the suppression/inhibition mechanisms, and as such the single-element buffer provides constant access to the last successfully received value on an input line.

2.1. Suppression

Suppression is achieved by connecting an additional wire to the input of a 'suppressed' module. Inputs received along this additional wire are sent to the module as replacement input for its usual input channel, and other data inputs are ignored whilst the suppression occurs. The period for which this secondary input channel takes precedence is specified in the time constant of the suppression. This process essentially replaces all other inputs to the module with input coming from the 'suppressing' module.
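The behaviour of these single-element buffered wires can be sketched in Python. This is a hypothetical illustration (the Wire class is an invention for this example, not part of any library mentioned here): a write overwrites any undelivered message, and a read always yields the most recently delivered value.

```python
class Wire:
    """One-element buffer modelling a subsumption 'wire': a new
    message overwrites the old one, and a reader always gets the
    most recently delivered value (or None before any delivery)."""
    def __init__(self):
        self.last = None

    def send(self, value):
        # Overwrite: earlier undelivered messages are simply lost.
        self.last = value

    def read(self):
        # Non-blocking: the module can always run with the last value.
        return self.last

w = Wire()
w.send(10)
w.send(42)        # overwrites 10; message loss is expected here
print(w.read())   # -> 42
print(w.read())   # -> 42 (value persists until overwritten)
```

Message loss is thus harmless by design: a module that misses an intermediate value still proceeds with the latest one.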
Figure 2. Suppression, whilst seeking giant treacle puddings
To explain suppression, assume our robot has a seek.giant.treacle.pudding module, which is used to seek out pre-made, giant puddings (as shown in figure 2). If our robot managed to find a massive treacle pudding, it would then suppress any outputs from our avoid.walls module, as our robot no longer needs to avoid walls... because it now possesses a giant treacle pudding, and its task is complete.
2.2. Inhibition

When inhibiting a module, a wire from the 'inhibiting' module which will control the inhibition is connected to the output site of the 'inhibited' target module. If anything travels along this wire, output from the target module will be blocked, and that output is lost for the duration of time specified by the inhibitor. Inhibition is useful for disabling specific behaviours whose activity at a particular time or in particular circumstances is undesirable. Additionally, inhibition can be used on module outputs where suppression is taking place. If the wire in question is suppressing a behaviour that we desire from another level, inhibiting it will allow that behaviour to break free from the control of the suppressing module.
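A sequential Python sketch can illustrate the inhibition timing. The Inhibitor class and its explicit now timestamps are inventions for this example; following the occam-π implementation given later in section 3.2, each signal on the inhibit line restarts the inhibition window.

```python
class Inhibitor:
    """Sequential sketch of an inhibitor: a signal on the inhibit
    line blocks the module's output for `timeout` time units, and
    each further signal restarts the window."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.until = None            # time at which inhibition ends

    def inhibit(self, now):
        self.until = now + self.timeout   # (re)start the window

    def event(self, value, now):
        if self.until is not None and now < self.until:
            return None              # output blocked and lost
        self.until = None
        return value

i = Inhibitor(timeout=10)
print(i.event(1, now=0))    # -> 1 (not inhibited)
i.inhibit(now=1)            # inhibit until t=11
print(i.event(2, now=5))    # -> None (blocked)
i.inhibit(now=8)            # window restarted: now until t=18
print(i.event(3, now=12))   # -> None (still blocked)
print(i.event(4, now=20))   # -> 4 (window expired)
```

Note the contrast with suppression, where (as described in section 3.1) repeated control signals do not restart the window.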
Figure 3. A visual representation of inhibition
For example, as can be seen in figure 3, our treacle-making robot might discover a pile of sugar near a wall. In this situation, a seek.sugar module might inhibit the outputs of the avoid.walls module to the motors to avoid movement being triggered whilst it collects sugar, even if the current position of the robot is near a wall. 2.3. Subsumption Use of the subsumption architecture means that a basic control system can be established for the lowest hardware level functionality of the robot and additional levels of competence can be built on top, with little change required to the existing modules and layers. With correct use of suppressors and inhibitors, the system can vary between several different modes
of operation depending on inputs, making the best re-use of already written and debugged modules in existing levels whilst doing so. For example, a path-planning module can coexist with a random heading generator that would otherwise produce a 'wandering' behaviour for the robot. Output from the path-planning module could be used to suppress outputs from the random heading generator, allowing it to take control of the robot's motion. This would mean that a robot could establish a target location after exploring an environment and then head towards it.

This style of control can be likened to the biological idea of reflex and cognitive actions. Reflexes occur quickly to protect the body, without any cognitive input before they occur. A simplistic base level of behaviours can simulate this, providing a computationally cheap and prioritised protective layer of functionality for the robot. Above this simulated level of reflexes, more complex behaviours can be added, and control falls back to the lower levels when those behaviours have no appropriate actions or outputs available. By using separate inhibitors and suppressors, the behavioural modules are isolated from the interaction of the different layers of the system. These modules can be debugged and then stay static, remaining robust even as the system grows around them.

3. The Subsumption Architecture and occam-π

A number of the concepts in Brooks' subsumption architecture bear considerable resemblance to primitives or specific abilities of the occam-π language [4]. The processor 'modules' that make up the system have four specific states that perform different operations, such as sending a message on an output line or making a calculation based on input. These states are switched between to determine the behaviour of the module. occam-π provides more flexibility than Brooks' processes, allowing the definition of processes with arbitrarily complex behaviour.
Implementation of the simplistic operations comprising individual modules in the original subsumption architecture is straightforward in the occam-π language. Where the subsumption architecture has lossy, unreliable wires, occam-π gives us full communication channels with reliable message delivery. Brooks worked around the unreliability of 'wires' in his implementation by using single-item buffers to allow access to the last received value on a wire at any given time, ensuring that modules can always execute. If we wished to simulate this behaviour in occam-π we could build a single-item buffer module to allow values to be read at any time on a channel, but it is for the most part more desirable to benefit directly from the reliable communications provided by occam-π.

Suppression and inhibition in the original subsumption architecture are performed directly at the input and output sites of wire connection, but this is not possible in occam-π. However, by modelling these actions as processes, increased transparency is brought to the network; indeed, the network diagrams in Brooks' own report separate these two actions into distinct elements.

3.1. Suppression

Suppression is achieved by inserting a process (as shown in figure 4) between two communicating processes; the inserted process can also receive input on a third suppress channel to control the suppression. Under normal conditions, inputs received by the suppress process are routed from in to out. The first input received on the suppress channel whilst the process is in a non-suppressing state triggers suppression for the length of time specified in the time constant given to the process. When suppression is taking place, inputs from the suppress channel are routed to the out channel. Subsequent values received on the suppress channel do not reset the time-out value on the suppression process, although once the process switches back to normal, continued inputs will initiate the suppression once more.
PROC suppress.int (VAL INT timeout, CHAN INT suppress?, in?, out!)
  TIMER tim:
  INITIAL INT time IS 0:
  INITIAL BOOL suppressing IS FALSE:
  INT value:
  WHILE TRUE
    PRI ALT
      NOT suppressing & suppress ? value
        SEQ
          suppressing := TRUE
          tim ? time
          time := time PLUS timeout
          out ! value
      NOT suppressing & in ? value
        out ! value
      suppressing & tim ? AFTER time
        suppressing := FALSE
      suppressing & suppress ? value
        out ! value
      suppressing & in ? value
        SKIP
:
Listing 1. A process providing suppression for a channel of integers in occam-π.
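To make the state transitions above concrete, the control logic of suppress.int can be simulated sequentially in Python. This is an illustrative sketch only (the Suppressor class, its event method and the explicit now timestamps are inventions for the example, not part of the occam-π library); it returns the value forwarded on the out channel, or None when an input is dropped.

```python
class Suppressor:
    """Sequential sketch of suppress.int: events carry an explicit
    timestamp; returns the value forwarded on 'out', or None when
    the normal input is dropped during suppression."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.until = None          # time at which suppression ends

    def event(self, source, value, now):
        if self.until is not None and now >= self.until:
            self.until = None      # suppression period has expired
        if self.until is None:
            if source == 'suppress':
                self.until = now + self.timeout  # start suppressing
            return value           # pass through in either case
        else:
            if source == 'suppress':
                return value       # forwarded, but timeout NOT reset
            return None            # normal input dropped

s = Suppressor(timeout=10)
print(s.event('in', 1, now=0))        # -> 1 (normal pass-through)
print(s.event('suppress', 7, now=1))  # -> 7 (suppression starts)
print(s.event('in', 2, now=5))        # -> None (dropped)
print(s.event('in', 3, now=12))       # -> 3 (suppression expired)
```

The sketch mirrors the listing's guards: a suppress value received while suppressing is still forwarded but does not extend the time-out.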
Figure 4. The suppress.int process, which provides suppression on an occam-π channel of integers.
Figure 5. The inhibit.int process, which provides inhibition on an occam-π channel of integers.
3.2. Inhibition

Inhibition can be achieved by placing a process between the target module and a module reading output from it. This process also has a second channel, for receiving signals to indicate when inhibition should take place, and is initialised with a time-constant parameter that determines the amount of time for which inhibition will occur once triggered. Each time a value is sent on the inhibition control channel, the time before the process will stop inhibiting is reset. This process is slightly different for each type it must inhibit; the code for a version of this process inhibiting integers can be seen in listing 2.

PROC inhibit.int (VAL INT timeout, CHAN BOOL inhibit?, CHAN INT in?, out!)
  TIMER tim:
  INITIAL INT time IS 0:
  INITIAL BOOL inhibiting IS FALSE:
  INT data:
  BOOL flag:
  WHILE TRUE
    PRI ALT
      inhibit ? flag
        SEQ
          inhibiting := TRUE
          tim ? time
          time := time PLUS timeout
      inhibiting & tim ? AFTER time
        inhibiting := FALSE
      NOT inhibiting & in ? data
        out ! data
      inhibiting & in ? data
        SKIP
:
Listing 2. A module to allow inhibition of an occam-π channel.

4. Robotics with occam-π

A language like occam-π has a natural place in the world of robotics, and in the past its use has been explored on small platforms like the LEGO Mindstorms [5]. In this paper our experimentation takes place on a Pioneer 3 robot¹. The Pioneer 3-DX, produced by ActivMedia Robotics, has a two-wheel differential drive, sixteen ultrasonic range-finders arrayed around its circumference, and a high-resolution laser range-finder, which provides centimetre resolution to an eight-metre distance in a forward-facing, 180-degree arc. Inside the particular robot used in our experiment is a 700MHz PC104 board running Debian GNU/Linux.

4.1. Player/Stage

There are several ways to program a robot like the Pioneer 3. First, it is possible to forego the embedded PC104 and program directly against the robot's hardware control board, connected to the PC via a serial port. Second, the manufacturer provides an object-oriented API (accessible from C, C++, Java, and Python), called ARIA [6], which provides a control interface for all of their robotics platforms. Third, and most interesting, is the open-source Player API — a cross-platform robotics API written in C/C++ [7]. Player is interesting as it provides an abstracted driver interface for motors, sensors, and other devices typically found on a robot, allowing control logic to be ported easily from one robotics platform to another with minimal modification of applications on the part of the developer. Additionally, it is built as a client/server application, meaning code written against the client library might be run on a remote desktop PC, while the server runs on a robot connected via ethernet or a serial port. This separation also makes authoring a graphical simulator significantly easier; currently, two ship with the Player library. The first is Stage, a 2D simulator capable of displaying dozens of robots simultaneously; the second is Gazebo, a 3D simulator which provides a virtual world complete with accurate physics for more detailed testing of control algorithms.

4.2.
Player/Stage and occam-π

Player, like many other robotics control libraries, is written in a sequential language, with no abstractions provided to aid the programmer in dealing with the concurrent programs that must necessarily be written to control robots engaged in interesting tasks. Player exposes a single control loop, implying that programmers must write their own multi-threaded applications and be continuously aware of timing issues when polling the driver. Solutions of this nature are often fragile in the face of race hazards and deadlock.

To make the Player library safer for use in control-system structures that have the potential to be massively concurrent, we have wrapped the library using our SWIG-based wrapper interface generator [8]. This allows us to access the C library directly from occam-π programs running on the Transterpreter [9], a portable run-time for the occam-π programming language. Just making Player available as a foreign library is not enough, however. A small accompanying library, written in occam-π, provides a process-centric interface to the underlying C API [10]. This combination of an occam-π process-centric interface and library can deliver data between 50 and 60 times faster than the update speeds of the available sensors when running in basic process networks. The end result is a portable, thread-safe robotics library that allows us to develop code on any robotics platform to which the Player API has been ported, of which there are many.

¹ Our development work relies heavily on simulation, with testing being carried out on real robots.
5. Robot Control with the Subsumption Architecture and occam-π

To explore how the subsumption architecture becomes one process network layered on top of another in practice, we developed a simple robot control program [11] that has multiple behaviours and two levels of competence. At its first level of competence, the robot avoids colliding with objects it can see using its laser range-finder, wanders an environment and pivots backward away from objects it detects. The second competence level is added such that when the robot is backing up, it will check the distance behind itself using the four central sonar on the back of the Pioneer, and instead of continuing to back up, will go forward to give itself room to complete the turn, whilst still not colliding with objects in either direction. As discussed previously, the robot's laser range-finder is forward-facing and covers a 180-degree arc, meaning that the sonar array must be used for the second level of competence. This example demonstrates the use of both shared and multiple sensor inputs to the control program, and also shows the behaviours that can be achieved by mixing inhibitors and suppressors even with simplistic modules. Although implementing these behaviours explicitly could be more concise [5], we believe the subsumptive approach can be made to scale to increasingly sophisticated behaviours where a direct implementation cannot.

5.1. Infrastructure

Critical to the operation of our robot is the occam-π Pioneer Robotics Library [10]. In particular, it exposes a series of brain.stem processes which can be used to interact with the robotics library. The laser data channel carries an array of 180 integers ranged [0-800] and the sonar data channel carries an array of 16 integers ranged [0-500], both of which are distances in centimetres. In our example, we declare the ends of these channels SHARED to enable multiple processes from different levels of the architecture to access the data. The motor control channel takes a PROTOCOL of three integers representing the speed of the robot in the X-axis, the Y-axis, and its rotational velocity; in the case of our particular robot there is no Y-axis (the robot cannot scuttle sideways). These control commands are abstracted over by the motor process, which takes in a channel of integers mapped to constants for convenience (motor.stop, motor.forward, etc.).
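As an illustration of this command mapping, the symbolic constants can be expanded into the three-integer PROTOCOL in Python. The specific constant names and velocity values below are assumptions for the sketch; the actual values used by the occam-π Pioneer Robotics Library may differ.

```python
# Hypothetical mapping from symbolic motor commands to the
# three-integer (x, y, rotation) PROTOCOL described in the text.
# The Pioneer 3 is a differential-drive robot, so the y component
# is always zero.
MOTOR_COMMANDS = {
    'motor.stop':       (0,    0,  0),
    'motor.forward':    (100,  0,  0),   # assumed mm/s forward
    'motor.back.right': (-100, 0, 25),   # reverse while rotating
}

def to_protocol(command):
    """Expand a symbolic command into the three-integer PROTOCOL."""
    x, y, rot = MOTOR_COMMANDS[command]
    return (x, y, rot)

print(to_protocol('motor.stop'))  # -> (0, 0, 0)
```

Abstracting the protocol behind named commands keeps the behaviour processes (such as prevent.collision and pivot) independent of the particular drive geometry.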
Figure 6. A process network diagram for a robot that avoids colliding with objects whilst also wandering and turning away from objects it encounters.
5.2. First Level of Competence

The first level of competence has two main behaviours and is shown in figure 6. One of its behaviours is to avoid colliding with any objects it can see with the laser range-finder. This behaviour keeps the robot from colliding with objects and acts essentially as a protective reflex, taking action regardless of whatever else the robot happens to be doing. To do this, it uses a combination of two processes: min.distance and prevent.collision. The min.distance process reads through the entire array of laser data as each set of data arrives, and sends the minimum value out on a channel of integers to the prevent.collision process.

PROC prevent.collision (CHAN INT distance?, CHAN INT act!)
  WHILE TRUE
    INITIAL INT min IS 0:
    SEQ
      distance ? min
      IF
        min < 20
          act ! motor.stop
        TRUE
          act ! motor.forward
:
Listing 3. prevent.collision, a base-level behaviour to prevent the robot from colliding with objects in any direction.
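The laser-array helpers at this level can be sketched in Python: min.distance as described above, and the detect.object process described in the following text, which scans the central 90 degrees of the 180-element laser array for obstacles closer than 75cm. These are hypothetical sketches; the function names and exact index range are assumptions.

```python
def min_distance(laser_scan):
    """Sketch of min.distance: forward the smallest reading of
    each 180-element scan (distances in centimetres)."""
    return min(laser_scan)

def detect_object(laser_scan, threshold_cm=75):
    """Sketch of detect.object: examine the central 90 degrees of
    the scan (assumed to be indices 45..134) and report whether
    any reading falls below the threshold."""
    central = laser_scan[45:135]
    return any(d < threshold_cm for d in central)

clear = [800] * 180
blocked = [800] * 180
blocked[90] = 50               # obstacle 50cm dead ahead
print(min_distance(blocked))   # -> 50
print(detect_object(clear))    # -> False
print(detect_object(blocked))  # -> True
```

In the occam-π network these would be long-running processes reading whole scans from the shared laser channel and sending one result per scan; the functions above capture only the per-scan computation.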
prevent.collision watches for objects using the laser range-finder. If the value received from min.distance is less than 20cm then an object is 'seen' by the robot, and a motor.stop message is sent to the motors. If no object is observed, then the process sends a motor.forward command to the motors, meaning the robot can recover from being halted if the environment subsequently changes.

The second behaviour at this level is that the robot will pivot backwards whenever it detects an object in front of it. Using the laser.data channel, another process, detect.object, reads through the central 90 degrees of each laser scan and looks for any obstacles closer than 75cm. Upon processing an entire scan, the process sends a boolean value on the object channel indicating whether it has detected an object in the robot's path. The pivot process sends a back-up command over the suppress.act channel to the motor process whenever a signal is received on its own object channel. It does nothing otherwise, as outputs from the process control the suppression line of suppress.motor.

The avoid-collision, wandering and pivot-backwards behaviours are connected by a suppressor, suppress.motor, which is the same as the suppress.int process seen in figure 4. This means that when the pivot process is active, motor commands from
PROC pivot (CHAN BOOL object?, CHAN INT suppress.act!)
  WHILE TRUE
    BOOL is.object:
    SEQ
      object ? is.object
      IF
        is.object
          suppress.act ! motor.back.right
        TRUE
          SKIP
:
Listing 4. pivot, a process that requests a backward turn whenever an object is detected, and otherwise does nothing, leaving the 'wandering' forward motion to the lower level.
prevent.collision are dropped (telling the robot to go forward, or stop), and the command to turn right from pivot will be sent instead. The time interval on the suppressor is set to 1,000,000 μs (one second), meaning that the robot will back up for that period before the choice is made again whether to pivot or go forward. When there is a clear path in front of the robot again, the lower-level behaviour (of going forward when there is clear space ahead) resumes control of the robot.

5.3. Second Level of Competence

Up to this point, our robot can wander in space, turn away from objects and prevent collisions. However, it has a deficiency: it is possible for the robot to back into walls whilst trying to reverse away from objects in front of it, as shown in figure 7. Following the principles of the subsumption architecture, we can add another behaviour that checks whether the robot has space to back up and turn. When there is no space to back up, this behaviour can inhibit the signals coming from the pivot process, and instead allow the motor.forward commands from prevent.collision in the base level through, as shown in figure 9. Causing the robot to travel forward temporarily gives it more space to back up and pivot into, adding more 'points' to the turning motion, but allowing the robot to successfully complete its backward turn.
Figure 7. At the first level of competence, the robot is able to reverse into walls while trying to find clear space.
Figure 8. Demonstrating the second level of competence in action, the robot successfully navigates the environment without running into the wall behind it.
Figure 9. A process network diagram for a robot control system with two competencies, allowing more successful negotiation of an environment based on both sonar and laser data.
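The rear-clearance check performed by has.space.behind, described in the surrounding text, can be sketched in Python. The sonar indices and clearance threshold below are assumptions for the sketch; the real Pioneer 3 sonar numbering and the threshold used in our program may differ.

```python
# Assumed indices of the four central rear sonar in the 16-element
# sonar array; the actual Pioneer 3 numbering may differ.
REAR_CENTRAL = (11, 12, 13, 14)

def has_space_behind(sonar, clearance_cm=50):
    """Sketch of has.space.behind: True when all four central rear
    sonar report at least clearance_cm of free space."""
    return all(sonar[i] >= clearance_cm for i in REAR_CENTRAL)

sonar = [500] * 16
print(has_space_behind(sonar))   # -> True (nothing behind)
sonar[12] = 20                   # wall close behind the robot
print(has_space_behind(sonar))   # -> False (inhibit pivoting)
```

When this check fails, the behaviour signals inhibit.pivot, blocking the backward-turn commands so that the forward commands from the base level pass through instead.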
The has.space.behind process uses the middle four sonar sensors at the rear of the Pioneer 3, and checks that there is room behind the robot. The inhibit.pivot process uses the same code that was introduced earlier (listing 2) as an example of inhibition on occam-π channels. Different delays are used for the inhibitor and suppressor, because inhibit.pivot must restrain the outputs of suppress.motor for long enough that the robot can move forward a significant amount. Otherwise, the robot falls into a needlessly long see-saw motion, wobbling back and forth when caught between 'a rock and a hard place'. After adding this additional layer of competence, it is possible to see in figure 8 that the robot can successfully negotiate the environment whilst not backing into walls.

In our example, it is possible to see that the processes originally used in lower levels are maintained and the lower-level system is kept intact. New levels of functionality merely augment the system and improve its overall ability to perform the desired task. These levels can be progressively debugged as they are added, and once debugged can be relied upon by subsequent layers, meaning the system should remain robust even as it grows in size and complexity.

6. Conclusions and Future Work

Based on our initial explorations, the subsumption architecture appears to be a natural design paradigm for occam-π robotic control. We can implement desirable, low-level behaviours for our robots, and then extend those networks with higher-level behaviours, using Brooks' notions of inhibition and suppression. However, further experiments are necessary to convince ourselves of the value of the subsumption architecture as a paradigm for robotic control in the occam-π programming language. Given the example levels of competence presented, it would be useful to investigate creating additional levels for the example presented in this paper.
Making these levels modular, such that others building control systems in occam-π can make use of them, would also seem wise. A stable, debugged core of modules for developing subsumption architectures, shipped with the occam-π Player/Stage library, would be a useful step forward in promoting this approach to control with the language. Developing similar sets of modules for the sensors found on smaller, more commonly available robotics platforms like the LEGO Mindstorms would provide additional opportunities to use this paradigm for teaching purposes.
Additionally, implementing Brooks' subsumption architecture in a manner more closely mirroring its original form, detailed in his technical report [12], would be an interesting challenge to attempt in the occam-π language.

Acknowledgements

We are very grateful to Damian Dimmich for providing Player client library wrappers, making it possible to program the Pioneer 3 using occam-π and the Transterpreter [9].

References
[1] Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach, chapter 2.4, pages 46–48. Pearson Education, 2003.
[2] Rodney A. Brooks. Cambrian Intelligence: The Early History of the New AI, chapter Preface, page xi. MIT Press, Cambridge, MA, USA, 1999.
[3] Valentino Braitenberg. Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, MA, USA, 1986.
[4] P.H. Welch and F.R.M. Barnes. Communicating mobile processes: introducing occam-pi. In A.E. Abdallah, C.B. Jones, and J.W. Sanders, editors, 25 Years of CSP, volume 3525 of Lecture Notes in Computer Science, pages 175–210. Springer Verlag, April 2005.
[5] Christian L. Jacobsen and Matthew C. Jadud. Towards concrete concurrency: occam-pi on the LEGO Mindstorms. In SIGCSE '05: Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education, pages 431–435, New York, NY, USA, 2005. ACM Press.
[6] ActivMedia Robotics. Advanced Robotics Interface for Applications (ARIA) Robotic Sensing and Control Libraries. http://www.activrobots.com/SOFTWARE/aria.html.
[7] B. Gerkey, R. Vaughan, and A. Howard. The Player/Stage project: Tools for multi-robot and distributed sensor systems. In Proceedings of the International Conference on Advanced Robotics (ICAR 2003), Coimbra, Portugal, June 30 – July 3, 2003, pages 317–323, 2003.
[8] Damian J. Dimmich and Christian L. Jacobsen. A Foreign Function Interface Generator for occam-pi. In J. Broenink, H. Roebbers, J. Sunter, P. Welch, and D.
Wood, editors, Communicating Process Architectures 2005, pages 235–248, Amsterdam, The Netherlands, September 2005. IOS Press. [9] Christian L. Jacobsen and Matthew C. Jadud. The Transterpreter: A Transputer Interpreter. In Communicating Process Architectures 2004, pages 99–107, 2004. [10] Christian L. Jacobsen and Matthew C. Jadud. The occam Pioneer Robotics Library. http://www.transterpreter.org/documentation/occam-pioneer-robotics-library.pdf. [11] Jonathan Simpson, Christian L. Jacobsen, and Matthew C. Jadud. A bump and wander robot using the Subsumption Architecture in occam-pi. http://www.transterpreter.org/wiki/Subsumption. [12] Rodney A. Brooks. A robust layered control system for a mobile robot. Technical report, MIT, Cambridge, MA, USA, 1985.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Rain: A New Concurrent Process-Oriented Programming Language Neil BROWN Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NF, England
[email protected] Abstract. This paper details the design of a new concurrent process-oriented programming language, Rain. The language borrows heavily from occam-π and C++ to create a new language based on process-oriented programming, marrying channel-based communication, a clear division between statement and expression, and elements of functional programming. An expressive yet simple type system, coupled with templates, underpins the language. Modern features such as Unicode support and 64-bit integers are included from the outset, and new ideas involving permissions and coding standards are also proposed. The language targets a new virtual machine, which is detailed in a companion paper along with benchmarks of its performance. Keywords. Process-oriented programming, Concurrency, Language design, Rain
Introduction

Historically, desktop computing has been completely dominated by single-CPU, single-core machines. This is now changing — Intel and AMD, the two giants of desktop processor manufacture, both have a multi-core processor as their central market offering. It appears that the new dawn of parallelism has finally arrived, forced by the slowdown in the exponential growth of processor clock speeds; the race to increase the gigahertz has been replaced by a race to increase the cores.

Programming languages have not yet caught up with this shift. C and C++, still two of the most popular mainstream languages, completely lack any support for concurrency at the language level. Java has threads and monitors built into the language, but using these for practical safe concurrency is not easy. The primary language with strong, safe support for concurrency built in is occam-π [2], a very different language to the C/C++/Java triumvirate.

Despite many innovations and developments, the level of abstraction of programming languages has moved at a glacial pace over the past sixty years. Broadly speaking, the progression has been: machine code, assembly language, imperative procedural languages (e.g. FORTRAN, C), and object-oriented languages (e.g. C++¹, Java). It is my hope that the next step in that chain will be process-oriented programming. History suggests that in order for this to happen, the change will have to be made in small increments.

¹ Although C++ is technically a multi-paradigm language, by far its most common mode of use is, in effect, object-oriented C.

The gap between, say, Java and occam-π is vast in every respect. occam-π has no objects, collection classes (or built-in collection data types) or references, and it has a totally different syntax. occam-π encourages parallel code and makes use of channels for communicating between processes, rather than method calls between objects. From a practical business perspective, the differences mean that re-training would be necessary and much existing/legacy code would have to be re-written.

Libraries such as JCSP [3], CTJ [4] and C++CSP [5] offer a bridge between current popular languages and fully process-oriented programming. However, these libraries suffer from the limitations of the languages they are written in. Despite all the efforts of the library developers, programmers will always be able to write unsafe code using these libraries. For example, two C++CSP processes can both hold a pointer to the same data structure, which both can freely modify concurrently, each overwriting the changes of the other. Such problems must be addressed at the language level if they are to be eliminated.

Groovy Parallel [6] is an example of a project that helps to bridge the gap between mainstream languages and easy process-oriented programming. It is compatible with Java at the byte-code level. However, it still uses the Groovy language — unaltered — as its base. This means that the problems described above with JCSP et al. apply to Groovy Parallel. Honeysuckle [7] does solve these problems at the language level, but is headed in a novel direction that diverges from some of the central concepts of process-oriented programming (such as channel communication).

I propose the development of a new process-oriented programming language, Rain. The language can be used on its own but will also be able to interface with C++CSP. This will allow existing C++ code to be used together with Rain code, with channels linking the C++CSP and Rain processes. The language will build on process-oriented programming and add new features such as templates and permissions (described later in this paper). The design of Rain is detailed in the remainder of this paper.

The Rain language is intended to follow the write-once, run-anywhere pattern of Java and other interpreted languages.
This will allow it to take advantage of heterogeneous concurrency mechanisms across multiple architectures without any code changes. This is described in detail in the accompanying paper about the Rain Virtual Machine (VM) [1], which also describes the C++ interface. The paper also provides performance benchmarks.
1. The Role of the Compiler

Studies carried out on the process of programming have shown that the earlier in the development/release cycle a bug is caught, the lower the cost of fixing it [8]. Even if a test-first methodology is used, the effective development flow for the actual code is of the form: editor, compiler, unit tests, system tests. Language-aware editors, usually present in Integrated Development Environments (IDEs) such as Eclipse [9], can help to highlight syntax errors before compilation takes place. The compiler can then spot some errors before the code is transformed and run through the unit tests, which should (hopefully) catch any semantic errors — assuming that coverage is complete, which it rarely is, given the difficulty of achieving complete coverage.

Different compilers detect widely differing ranges of errors. An assembly language compiler can spot syntax mistakes, but as long as the operation is a valid one for the CPU, no other errors will be issued. Compilers for languages such as C++ and Java can pick up type errors and errors such as the potential use of a variable before initialisation. At the other end of the scale, interpreted languages have no compiler to speak of (other than the syntax-checking when loading a source file) and hence no compile-time errors are issued. Errors such as type errors, unsupported interface methods and function/variable name typos, all picked up by the compiler in a language such as Java, will instead be detected by the tests. If the test coverage is incomplete then the errors will remain in the final code.

Given that the compiler is at such an early stage in development, and can be made powerful enough to detect a large proportion of common programming errors, it seems wise to
do just that. The compiler should eliminate as many potential errors as it can. This will need to be a combination of language design and compiler implementation.

In a presentation [10], Tim Sweeney of Epic Games (games programming typically being an area for the C/C++/Java/C# family of languages) provides a four-line C# function with five possible failures, followed by a semantically identical three-line pseudo-Haskell function with one possible failure. Solely by a combination of the language and the compiler, the same algorithm is made safer. Games programming has always been focused on the use of C-like languages for performance, but even in this domain it seems that higher-level, safer, expressive languages are seen as the future (note 2).

The Rain compiler will try to detect and issue compiler errors for as many potentially dangerous code patterns as possible. If the safety of a given piece of source code is unclear, in particular on issues such as enforcing Concurrent-Read Exclusive-Write (CREW), the compiler should adopt a pessimistic (least-permissive) approach. Apart from this being the safest approach, in practical terms it is better for a future version of the compiler to accept a superset of the programs accepted by the current compiler rather than a subset — the latter option leading to non-compilable legacy code.

2. Processes and Functions

There are two major units of program code in Rain: processes and functions. A process is equivalent to a statement: it can affect and access external state (e.g. via channels), but it does not return a value explicitly — although using a channel it can accomplish a similar effect. A function is equivalent to an expression: it cannot affect or access external state (only its parameters) and will always return a value — otherwise it would be a null statement from the caller's perspective.
This means that functions can have no side-effects, and because they never depend on external state, only their passed parameters, they always have the potential to be inlined. Functions are permitted to be recursive. It is hoped that this will allow a marriage of process-oriented and functional programming, with a cleaner syntax for the latter than occam-π's. Although occam-π also contains this functions-as-expressions concept, it does not allow for recursive functions, nor the communication of functions (expanded on in section 4.4).

Functional languages have always had problems with any concept of I/O or time, due to their lack of state. This has made interaction with user interfaces or networks difficult — things that process-oriented languages can excel at. I/O can involve burdensome contortions such as Monads [11]. It is intended that the combination with process-oriented programming should remove such deficiencies of functional programming.

3. Communication

Like occam-π, Rain contains three communication primitives: channels, barriers and buckets. One-to-one unbuffered channels are available. Due to the anticipated implementation difficulties in the virtual machine, no guarantees are offered regarding the availability of buffers, any-to-one modality, etc.

Like C++CSP and occam-π, Rain has the concept of both channel and channel-end types. Channels cannot be used for anything except accessing their channel ends. Channel ends may be used to read or write depending on the end they represent. There are some complications to this however: poisoning, ALTing, and extended rendezvous.

Note 2: Although it is beyond the scope of this particular paper, Sweeney goes on to outline concurrency problems in game programming, which may well be of interest to the reader — including his opinion that "there is a wonderful correspondence between features that aid reliability and features that enable concurrency".
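Since Rain itself is not available to run, the division between a channel and its two ends can be sketched in Go, whose directional channel types play a role similar to Rain's ?int and !int ends (all names below are illustrative):

```go
package main

import "fmt"

// writer holds only the writing end (chan<-); the Go compiler rejects
// any attempt to read from it, much as Rain's !int end is write-only.
func writer(out chan<- int) {
	for i := 0; i < 3; i++ {
		out <- i
	}
	close(out)
}

// reader holds only the reading end (<-chan), the ?int analogue.
func reader(in <-chan int) []int {
	var got []int
	for v := range in {
		got = append(got, v)
	}
	return got
}

func main() {
	c := make(chan int) // a one-to-one unbuffered channel, as in Rain
	go writer(c)
	fmt.Println(reader(c)) // [0 1 2]
}
```

As in Rain, the channel value `c` itself is only used to obtain its two ends; each process sees only the modality it was given.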
3.1. ALTing and Extended Rendezvous

ALTing, the term often used to describe what East [7] terms selection (to distinguish from alternation), is a very powerful construct. Extended rendezvous (also referred to here as extended input) is an occam-π addition made by Barnes and Welch [12]. It essentially allows the reader to block the writer until the reader has completed further actions. This allows buffering/tapping processes to become essentially invisible to either end.

Not all channels that Rain is intended to support will be internal. Some channels may be networked, and some may be external channels such as stdin and stdout or plain sockets. Network channels may support extended rendezvous, but not ALTing. stdin may support ALTing, but extended input would not be possible (or applicable). This leads to the idea of channel-ends supporting a particular operation. In object-oriented programming this would be construed as a channel supporting a particular interface.

Some processes will require ALTing and/or extended rendezvous on their channels, and some will be indifferent to the support. For example, an extended-id process (that behaves like id, but with an extended input) will naturally require its reading end to support extended input, but will be indifferent to ALTing support on the channel. A two-channel merger process (that takes input on either of its two reading channels and sends it out on one output channel) would require ALTing but not extended input.

Rain includes the concept of different channel input ends; the programmer can specify that a channel-reading end must support extended rendezvous or ALTing. If no such specifiers are included, such support is presumed not to be present. That is, by default the channel-reading end type (e.g. ?int) is assumed not to support anything other than normal input.

3.2. Poisoning

The Communicating Process Architectures (CPA) community seems to be divided over the matter of stateful poisoning of channels [5,13,14,15]. Having seen its utility in programming with C++CSP, I chose to include it in Rain.

At the suggestion of Peter Welch, C++CSP was modified to include channel ends that were non-poisonable. That way, programs could hand out channel ends that could not be used to poison the rest of the network; for example, the writing ends of an any-to-one channel that fed into an important server process. While any-to-one channels are currently not featured in Rain, the general logic prevails, and non-poisonable channel ends are explained below.

The idea of interfaces for channels could potentially be extended to distinguish between poisonable and non-poisonable channel ends. Consider the process id — there would need to be a version for poisonable channels and one without:

process id_int (?int: in, !int: out)
{
  int : x;
  while (true)
  {
    in ? x;
    out ! x;
  }
}

process id_int_poison (poisonable ?int: in, poisonable !int: out)
{
  {
    int : x;
    while (true)
    {
      in ? x;
      out ! x;
    }
  }
  on poison
  {
    poison in;
    poison out;
  }
}
If only in was poisonable, and out were not, then another version would be required. Likewise if it were reversed. This approach would clearly be untenable. The solution is therefore that channel ends are assumed to be poisonable. Poisoning a non-poisonable channel end simply has no effect. Note that non-poisonable channel ends can still become poisoned by the process at the other end. So the id_int_poison process above would be the correct implementation (less the poisonable key-words) in order to catch the poison exceptions.
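The nearest runnable analogue in Go is channel close: unlike Rain's poison it only propagates downstream, but it shows the same shutdown-forwarding shape as id_int_poison (a hedged sketch; names are illustrative):

```go
package main

import "fmt"

// idInt forwards values until it detects "poison" (here, a closed
// input channel), then poisons its own output by closing it so the
// shutdown spreads through the rest of the network.
func idInt(in <-chan int, out chan<- int) {
	for {
		x, ok := <-in
		if !ok { // poison detected on the reading end
			close(out) // propagate poison downstream
			return
		}
		out <- x
	}
}

func main() {
	a, b := make(chan int), make(chan int)
	go idInt(a, b)
	go func() {
		a <- 7
		close(a) // poison the pipeline
	}()
	for v := range b {
		fmt.Println(v) // 7
	}
}
```

Go has no equivalent of a non-poisonable end or of poisoning in the reverse direction, which is exactly the extra expressiveness Rain's stateful poison provides.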
4. Types

Rain is strongly statically typed. Dynamic typing saves the programmer initial effort, but experience has shown that this usually returns to haunt them in the form of run-time type errors. In line with the earlier discussion on the role of the compiler in section 1, it is preferred that the compiler do more work to save later problems.

Variables must be declared before use, with a specific type. They can be declared as constant. All function and process parameters are considered constant for their entire scope in the function/process. This saves confusion caused by reassigning function parameters. Descriptions of the types can be found in the following sub-sections.

4.1. Primitive Data Types

Currently, mass-market computing is undergoing a transition from 32-bit to 64-bit architectures. Therefore, Rain includes 64-bit integers. I anticipate that larger integers will not be necessary in future. While similar phrases (usually foolish in retrospect) have been uttered in computing over the years, I am willing to state that I believe that 64-bit integers should be enough for most uses (note 3).

To avoid the horrid conventions now present in other languages (for example, the long long in C++), integers are labelled with their size and signed modality. So the full set of integers is: sint8, sint16, sint32, sint64, uint8, uint16, uint32, uint64. int is also a built-in data type, and rather than adopt a sliding scale as per C/C++, it is defined to be an sint64. It is assumed that int will be used for most data (given that it will soon be the default word size on most machines), and the other types will be used when dealing with data that must be written on a network or to disk and hence requires a specific size.

The language currently offers real32 and real64 floating-point types. It is anticipated that the 128-bit version will be added in the future.

4.2. Communication Data Types

Rain contains channels, channel-ends, barriers and buckets.
Channels and channel-ends have a specified inner type, and channel-ends have a specified modality (input or output). Channels, barriers and buckets are all always constant. Making them constant prevents aliasing and also provides a clear point to allocate (upon declaration) and deallocate (at the end of their scope).

Note 3: For observed quantities larger than 2^64 the purpose is likely scientific computing, which will probably be using floating-point numbers and/or a natively compiled language. The other common use of integers in computing is to store exact frequencies (having an incremented unique identifier in databases is effectively the same thing). Even if a new event occurred every nanosecond, it would still take over 500 years to overflow the 64-bit integer.
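The arithmetic in the footnote is easy to check directly (a quick Go calculation; the constants are just nanoseconds-per-year):

```go
package main

import "fmt"

// yearsToOverflow: with one event per nanosecond, how many years
// before a 64-bit unsigned counter overflows?
func yearsToOverflow() float64 {
	const perYear = 1e9 * 60 * 60 * 24 * 365 // nanoseconds in a year
	return float64(^uint64(0)) / perYear
}

func main() {
	fmt.Printf("%.0f years\n", yearsToOverflow()) // roughly 585
}
```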
4.3. Complex Data Types

4.3.1. Named Tuples

Tuples in languages such as Prolog suffer from a lack of scalability. From experience, programming with a nine-member tuple and always remembering the right field order (in a dynamically typed language, to add to the problem) is a difficult task. Rain therefore offers what are referred to here as named tuples: a combination of records and tuples, almost a return to C structs:

typedef (real:x, real:y, real:z) : Point;
Point: p;
p = (0,1,1);
p.x = 1;
p = Point(z:0, y:0, x:0);

The tuples can be used either as tuples, or by named member access, or a combination of both, in a manner similar to Visual Basic's named parameter list.

4.3.2. Lists

C++ and Java did not build their main (non-array) list types into the language, but rather provided the language with tools from which the standard collection libraries were defined. This usually makes constructing these data structures difficult because there is no easy syntax. To define a list of the 3-D points (0,0,0) and (1,1,1) in Prolog would mean writing: [(0,0,0), (1,1,1)]. In C++ the code would normally be along the lines of:

vector<Point> v;
v.push_back(Point(0,0,0));
v.push_back(Point(1,1,1));

Even accounting for the extra type information, this is an awkward way of creating a list. Only the fiendishly clever Boost [16] 'assign' libraries help alleviate this problem for arbitrarily-sized lists. Java 1.5 introduced a type-safe form of varargs to deal with this problem [17].

Rain also offers list types. In C, C++ and Java, arrays and linked lists are two very different things: one built into the language and the other an object (or C's equivalent, a struct). Prolog only provides one list type, as does Python. Rain offers one list type, the underlying implementation of which (i.e. array or linked list) is guessed for best performance. Programmers may override this if desired.
typedef [Point] : PointList;
PointList: ps;
ps = [ (0,0,0), (1,1,1) ];
ps += (2,2,2);

Lists support the addition operator (and therefore the += operator) but no other operators.

4.3.3. Maps and Sets

The two other commonly used data structures are maps (key/value structures) and sets. A set can be treated as a map with an empty value for each key. As such, only maps are built into Rain. Sets are provided by using an underscore as an empty data type. Maps support insertion/over-writing (through the assignment operator), removal, direct element access by key, presence checks and iteration.
Insertion/over-writing is guaranteed to always succeed. Removal of a non-present element is not considered an error and will have no effect. The only dangerous operation is element access (i.e. if the specified key is not present in the map) — this is covered in section 7 on exceptions.

typedef < int : Point > : NumberedPoints;
typedef < Point : _ > : PointSet;
NumberedPoints : n;
PointSet : s;
s = _;
remove s;
n = (1,0,1);
if (n has 4) s< n > = _;

4.3.4. Variants

Named tuples, described above, are an example of a product type. Their complement is the sum type, typified in a dangerous manner by C's union type: unions are dangerous because they do not keep a record of their currently stored type, so a currently invalid member of a union can be accessed. Many languages also supply types often referred to as enumerations. These are types consisting of a small closed set of values, identified by a meaningful name rather than merely a number. Their advantage over using integers with a set of constants is that the compiler can ensure the full set of values is handled in switch-like statements, and that the constants from two different types are not mixed (for example, a file error constant is not used in place of a GUI error constant).

In occam-π, similar ideas are combined to form variant protocols — typically an enumeration-like constant preceding a particular type, the result being somewhat similar to a safe union. In occam-π, variant protocols only exist in the form of a communicable type. In Rain, it is intended that these structures be a standard data type, usable wherever other data structures (lists, maps, etc.) are. Haskell contains a similar concept of data with field labels. This idea requires further consideration before being finalised, however.

4.4. Processes and Function Types

Processes and functions can be stored and are therefore data types. Their type includes their parameter list (which itself is a named tuple type).
Processes do not carry with them any state (which would be a much more complex mobility discussion [18]). In effect, an instance of process or function data is merely a code address. Therefore they can be assigned at will, but can never contain an invalid value (the compiler can mandate this). Process and function data types can be transmitted over channels.

This opens up interesting code patterns. Consider a filter process, with an incoming and outgoing channel of type int, and an incoming channel of a function that takes a single parameter of type int and returns a boolean. The filter process would accept an input on its channel of int, and send out the values if the function returns true — a dynamically configurable filter. The process would also accept input on its function channel, which would provide a new filter. Example code for the process is given below (without poison handling for brevity):
process filter (?int : in, !int : out, ?function bool:(int) : filterIn)
{
  int: t;
  function bool:(int) : filter;
  filterIn ? filter;
  while (true)
  {
    pri alt
    {
      filterIn ? filter {}
      in ? t
      {
        if (filter(t))
          out ! t;
      }
    }
  }
}
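The same dynamically-configurable filter can be sketched in runnable Go, with function values sent over a channel and select standing in for pri alt (Go's select is not prioritised, a difference from the original; channel close stands in for the omitted poison handling; names are illustrative):

```go
package main

import "fmt"

// filter forwards values that satisfy the current predicate; a new
// predicate may arrive at any time on its own channel.
func filter(in <-chan int, out chan<- int, filterIn <-chan func(int) bool) {
	pred := <-filterIn // wait for the initial predicate
	for {
		select {
		case pred = <-filterIn: // install a replacement predicate
		case t, ok := <-in:
			if !ok { // input closed: shut down downstream too
				close(out)
				return
			}
			if pred(t) {
				out <- t
			}
		}
	}
}

func main() {
	in, out := make(chan int), make(chan int)
	fc := make(chan func(int) bool, 1)
	fc <- func(x int) bool { return x%2 == 0 } // keep even numbers
	go filter(in, out, fc)
	go func() {
		for i := 1; i <= 4; i++ {
			in <- i
		}
		close(in)
	}()
	for v := range out {
		fmt.Println(v) // 2, then 4
	}
}
```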
4.5. Composing Data Types

Data types in Rain are compositional, with a few necessary restrictions. Tuples and lists can contain any types. Maps can have any value type, but the key type must support ordering. Maps, communication primitives, processes and functions do not support ordering. Lists and tuples support ordering if (all of) their inner types do. All types support equality comparison.

Channels and channel-ends are a problem, as channel-ends must obey CREW. Consider the following code:

chan int: c;       #1
?int : in;         #2
[ ?int ] : a,b;    #3
in = c;            #4
a = [in];          #5
b = [in];          #6
seq all (x : a)    #7
  {b += x;}        #8
It would be relatively easy to spot the potential CREW problem (that is, the possibility of the non-shared channel end being used twice in parallel via the lists) on line 6. However, even if line 6 were removed, spotting the problem on line 8 is harder. Situations of greater complexity are also imaginable. Therefore, adopting a least-permissive approach, channels and channel-ends cannot be contained in other data types (nor in other channels). For similar reasons, barriers and buckets cannot be contained in any other data types.

5. Iteration Constructs

Rain offers sequential and parallel iteration loops for lists and maps, in the following form:

seq all (x : list)
{
  ... do something with x
}
par all ((key,value) : map)
{
  ... do something with key and value
}
It is inherent in these constructs that all invalid-access problems are avoided (for example, array index out of bounds or map key not being present). These are therefore the preferred
forms of processing entire collections. Direct access of individual elements by index/key must be guarded by exception handlers as touched upon in section 7 on exceptions. Note that the type of the iteration variables (x, key and value) is automatically deduced from the list/map type (where possible).
6. Templates

Consider the archetypal id process (shown without poison handling for brevity):

process id (?type : in, !type : out)
{
  type: x;
  while (true)
  {
    in ? x;
    out ! x;
  }
}
The only thing needed to compile this process is to know what type is. For a valid compilation, type could be any numeric type, a list, a map, or any other type that can be transmitted through channels. Having to write id over and over again for each type would be nonsense. C++CSP is able to use templates to provide a single definition of a process — this is then compiled again whenever the process is used with a new type. Experience has shown that templates are incredibly useful for creating libraries of common processes (thus encouraging code re-use and reducing programmer effort) and that they are generally a useful language feature. The type is substituted in at compile-time, not run-time, so it is just as safe as if each version were written out long-hand. This is a feature sadly lacking from any other process-oriented language that I am aware of.

In Rain, both processes and functions can be templated. Some form of compile-time reflection and/or partial specialisation is intended for inclusion, but the design of that is beyond the scope of this paper. Currently it will simply be plain type substitution. Either all types can be allowed (using the any keyword), or it can be restricted to numeric data types (using numeric). The conversions of id and successor are given below:

template (any: Type)
process id (?Type: in, !Type: out)
{
  Type: x;
  while (true)
  {
    in ? x;
    out ! x;
  }
}
template (numeric: Type)
process successor (?Type: in, !Type: out)
{
  Type: x;
  while (true)
  {
    in ? x;
    x += 1;
    out ! x;
  }
}
In C++, templates are provided by re-compiling the same code with the new type substituted in. This means that the source code is required (by including header files) every time the templated type is used. In Java, generics simply ‘auto-box’ the type, and thus the same piece of code is re-used for all instantiations of the generic object. C++ is able to use a templated type’s properties (e.g. making a method call on it) in a way that Java’s generics cannot without using inheritance. Rain adopts Java’s approach — through the use of type functions in the virtual machine (described in [1]), the equivalent of auto-boxing is performed.
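Modern Go generics (added well after this paper was written) make for a close runnable analogue of the any and numeric template constraints above (a sketch; the `numeric` constraint below is my own illustrative definition):

```go
package main

import "fmt"

// numeric plays the role of Rain's `numeric` restriction: the set of
// built-in types that support arithmetic.
type numeric interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 |
		~float32 | ~float64
}

// id is the `template (any: Type)` process: any type will do.
func id[T any](in <-chan T, out chan<- T) {
	for x := range in {
		out <- x
	}
	close(out)
}

// successor needs arithmetic, so it is constrained to numeric types.
func successor[T numeric](in <-chan T, out chan<- T) {
	for x := range in {
		out <- x + 1
	}
	close(out)
}

func main() {
	in, out := make(chan int), make(chan int)
	go successor(in, out)
	go func() { in <- 41; close(in) }()
	fmt.Println(<-out) // 42
}
```

Like Java's generics (and unlike C++ templates), Go may share one compiled body across instantiations, which matches the auto-boxing approach Rain adopts.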
7. Exceptions

Exceptions seem to have found favour in programming language design. C++, Java, the .NET languages and Python all include them. In their common implementation they allow error handling to be collected in a single location, and for clean-up (certainly in the case of C++) to happen automatically during stack-unwinding. This is usually done to avoid checking for an error code on every call made. Below, the first version illustrates checking every call for an error, whereas the second collects the error handling in the catch block.

if (file_open(...) == Error)
  { ... }
else if (file_write(...) == Error)
  { ... }
else if (file_close(...) == Error)
  { ... }
try
{
  file_open (...);
  file_write (...);
  file_close (...);
}
catch (FileException e)
{
  ...
}
C++CSP contained poison exceptions — thrown when an attempt was made to use a poisoned channel. As in the above example, exceptions were the best practical way of implementing stateful poisoning. The common poison-handling code for each process could be collected in one location.

There are a number of situations in Rain where the safety of an operation cannot be guaranteed at compile-time. The programmer usually has two alternatives: ensure that the operation will be safe with an appropriate check, or handle the exception. In the first (non-exception) example below, the compiler can perform a simple static analysis and understand that the array access is safe. In the second, an exception-handling block is provided.

if (xs.size > 5)
{
  x = xs[5];
}
else
{
  ...
}
{
  x = xs[5];
}
on invalid index
{
  ...
}
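The two styles translate directly into runnable Go: check the bound first, or let the out-of-range access panic and handle it in one place (a sketch; `guarded` and `handled` are illustrative names):

```go
package main

import "fmt"

// guarded checks before the access, like the static-analysis example.
func guarded(xs []int) (int, bool) {
	if len(xs) > 5 {
		return xs[5], true
	}
	return 0, false
}

// handled lets the access fail and recovers, playing the role of the
// `on invalid index` block.
func handled(xs []int) (x int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("invalid index: %v", r)
		}
	}()
	return xs[5], nil
}

func main() {
	short := []int{1, 2, 3}
	if _, ok := guarded(short); !ok {
		fmt.Println("guarded: index not available")
	}
	if _, err := handled(short); err != nil {
		fmt.Println("handled:", err)
	}
}
```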
Exceptions, as in other languages, are a mechanism for collecting error handling. The currently intended exceptions in Rain are: invalid list indexing, invalid map access, poison and divide-by-zero. The exceptions listed are required because of other features of the language.

Most languages allow programmers to define, throw and catch their own exception types. This was a possibility for Rain. However, other languages have a strongly procedural basis that fits the propagation of thrown exceptions up the call stack. Rain allows functions, but they would definitely not be allowed to throw an exception (as a function call is only an expression). Processes would not be allowed to throw exceptions that can be caught by their parents. Exceptions are also not allowed to be caught outside a par block (due to the difficulties involved with parallel exceptions [14]). In Rain exceptions are a highly localised affair, and therefore do not have the procedural-nesting usefulness that they have in languages such as C++. Therefore the programmer is provided with no mechanism to define their own exceptions in Rain.

Errors in process networks can be propagated either as error messages carried over channels or by using poison. It is up to the programmer which is used, but the intention behind the design is as follows. It is intended that poison be used only for unrecoverable errors and halting the program. For example, a process that is used to load up a file (the name of which is passed as a parameter) and send out its contents should poison its channels if it discovers that the file does not exist. Without the file, the process has no purpose. By contrast, a process that has an incoming channel for file-names, and an outgoing channel for byte lists (of the file contents), should use error messages. If a requested file does not exist, the process
is still capable of continuing to function when processing the next requested file.

8. Text and Unicode

Unicode [19] was created to allow all known language characters to be stored as numbers, and yet still leave space for more in the standard. Unicode is available in a number of different encodings. Unicode prompts two considerations from the point of view of implementing a programming language: the compilation of source files, and support within the language itself for Unicode.

Support in the language itself is the trickier issue. Either a dedicated string type can be created and built into the language (as in Java), or some construction like a list/array of bytes (as in C) can be used. Given that a dedicated type would be stored as a list of bytes underneath, the differences are really: how built into the language it should be, and what encodings should be offered/used by default.

Consider a program that takes a list of characters and sends them out one at a time on a channel. In ASCII, this would be done by accepting a list of sint8, and outputting a single sint8 at a time. In UTF-8, this must be done by accepting a list of sint8, and outputting a list of sint8; one character can be multiple sint8 bytes. Naturally, the temptation is to use an encoding where all characters are the same size. Technically, this is UTF-32, where every character is exactly four bytes. However, Unicode characters outside the two-byte range are quite rare.

Java originally picked a two-byte character size. At the time this was enough to hold all the planned Unicode characters [20]. Now that characters can be larger, the simplicity of having a character type that can hold all characters has been lost. Some believe that Java should resize its character class accordingly [21].
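The variable-width problem is easy to demonstrate in Go, whose source is UTF-8 but whose rune type is a 32-bit code point, essentially the [uint32] representation discussed here (a small illustrative sketch):

```go
package main

import "fmt"

// codePoints decodes a UTF-8 string into 32-bit code points; Go's
// rune is an alias for int32, one element per character.
func codePoints(s string) []rune {
	return []rune(s)
}

func main() {
	s := "héllo" // stored as UTF-8 bytes in the source file
	fmt.Println(len(s))             // 6: é occupies two bytes in UTF-8
	fmt.Println(len(codePoints(s))) // 5: one element per character
	fmt.Println(codePoints(s)[1])   // 233: the code point U+00E9 for é
}
```

The byte length and character length disagree precisely because of multi-byte characters, which is the indexing hazard a fixed 32-bit character type avoids.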
The decision of how large to make the default character type is therefore a trade-off between space efficiency (due to cache hits/misses, this also has an effect on performance) and the problems that would occur when encountering rare characters. The Unicode FAQ provides no specific guidance [22]. For simplicity of use, I have decided to use 32 bits as the character type. Strings, by default, will be a list of 32-bit values (therefore strictly one per character). The type string will be in-built, and exactly equivalent to [uint32]. Library functions and processes will be provided to aid conversion between encodings.

Rain source files are assumed to be UTF-8, although this will be made configurable via the command-line. White-space is used to separate variable names — therefore any non-white-space Unicode characters are valid in variable names. Any UTF-8 characters in a string literal between quotes " " will be converted and stored as UTF-32 for use as constant literals in the program.

9. Data Transmission

Concurrency has always been at odds with aliasing. Where aliasing is allowed in a program, two concurrent processes can have an alias for the same object, and CREW can be broken. Process-oriented languages have tried to prevent this by treating all data as private to the process. This presents an efficiency problem when large data structures are sent between processes.

occam originally took the approach of always copying. occam-π introduced the concept of mobiles — essentially a non-aliasing reference [23]. Honeysuckle introduced ownership, a similar idea. Mobiles, always-copy and duplicate-on-modify (allowing a reference to be shared as read-only, creating a new localised copy when it is modified) are three acceptable semantic solutions to the problem. The main question is whether to expose the dilemma to the
programmer, or hide the detail from them. Mobile data provides problems for the programmer (the potential for dereferencing an undefined mobile) and no benefits besides efficiency. Therefore including the idea of mobiles but hiding this detail from the programmer seems wise. The compiler will use heuristics to decide which semantics to adopt as the underlying mechanism. From the programmer's perspective, it must appear as if always-copy is being used. Consider the following producer process:

seq all (int x : [0..100])
{
  [int]: list = [0..x];
  out ! list;
}
The list is never referenced after its communication; therefore, in this example, mobile semantics would be wisest. Of course, the process on the other end of the communication needs to know whether it has received a reference that is duplicate-on-modify or not (from the receiver's perspective, a mobile reference is no different from a copied reference, as the receiver "owns" it). This involves a small amount of dynamic typing on the implementation side. Given that the programming language is being compiled to a virtual machine [1], the virtual machine can enforce the correct semantics.

10. Permissions

One of the challenges of programming is keeping on top of the design of a system. Even with one programmer this can be difficult; with multiple programmers and distributed teams it can become a nightmare. The compositional hierarchical designs that process-oriented programming naturally favours can help to alleviate this problem. Small components are easily composed into larger ones, and the interactions between the larger components (in the form of communication) are all visible from the parent component.

However, if left unchecked, understanding which components are using external services can become difficult. In a language such as Java, any part of a program can open up a socket or a file using the right API, and the calling method can be unaware of it. In my experience as a developer, it has been all too common to dig into method call after method call, only to find that one of them is making an unexpected SQL query over a socket, or opening its own log file. In process-oriented programming, such activities can lead to unexpected deadlocks that are not obvious from a system overview. Rain tentatively offers the idea of permissions (for want of a better name — services is ambiguous). A process must list the permissions that it requires. Potential examples include network, file or gui.
Processes are permitted to pass on any or all of their permissions to their sub-processes, but permissions cannot be transferred (for example, via a channel). If process PA is the parent of PB, which is the parent of PC, and PC needs to write to a file, then PB and PA will also need the file permission in order to pass it on down the hierarchy. This makes it obvious from the top-level process of the program where permissions are being granted, even if they are simply being granted everywhere (as is effectively the case in other languages).

11. Coding Standards

Coding standards have been a source of contention among programmers since they were conceived. Some believe that consistent coding standards for common patterns and naming
conventions are necessary on large projects to aid readability, lessen mistakes, and speed up programming when using code written by other developers. Others believe that they are a waste of time that only impedes programming. Coding standards can also be used where the standards-writer disagrees with a design decision taken by the language designer. For example, a C++ coding standard might disallow the choice operator (a ? b : c) because the standards-writer believes it unwisely hides choices inside expressions.

Not all programmers will agree with the choices that have been made in Rain. They may prefer forcing variable declarations to be at the start of a block (as in C) rather than anywhere (as in C++/Java). Alternatively, they may want to avoid the use of exceptions as much as possible, and would therefore want to disbar explicit list-indexing and unguarded map accesses (favouring iteration constructs instead).

In the future, the compiler will support user-specified warnings and errors on normally-legal program code. The compiler will be able to take two types of input: source files and (most likely) XML files containing policy. Naturally the range of errors that can be supported will have to be restricted to relatively simple rules (mainly about syntax uses rather than semantic patterns), but this will allow the programmer to customise the compiler slightly to their wishes — as perverse as asking the compiler for more errors may seem. I expect that some Perl programmers will recognise this latter desire.

The idea of a seemingly-reconfigurable language may be alarming at first. The key detail is that these policies are always less permissive than the default. That is, the language with policies (coding standards) in place will always be a subset of the original language.
12. Implementation Progress

This paper has detailed the design of the Rain programming language. I have been implementing the compiler alongside the implementation of the virtual machine [1] that forms the target for the compiler. The framework and design of the compiler are complete, and a subset of most of the compiler stages has been implemented. Compilation of very simple programs is now possible. The compiler is targeted at the Windows and GNU/Linux platforms, although I expect that it will compile on any operating system with a C++ compiler and a Boost [16] installation.

As with the virtual machine, development has taken place on a mostly test-first basis, with copious unit tests. I believe that this provides a measure of assurance in the proper functioning of the compiler. Looking back, I do not believe I could have come as far without such testing, despite the extra time that it needed.
13. Conclusions and Future Work

This paper has presented the design of a new process-oriented programming language, Rain, intended for stand-alone use or integration with existing C++ code. Rain is statically typed with a rich set of data types, and aims to eradicate as many errors as possible at compile-time. It offers (along with its VM) easy, portable concurrency in a language that will interface well with C++CSP, thereby allowing it to interact with existing C++ libraries; in the future I particularly hope to include support for GUIs and networking, applications well suited to process-oriented programming.

Rain includes many innovations not found in other process-oriented programming languages. These include channel-end interfaces (e.g. differentiating between channel-end types that support ALTing and those that do not), templates and permissions. These should make
the language interesting to current process-oriented programmers, who it is hoped will find these features useful. It is unfortunate that further progress has not been made on the compiler; the implementation of the virtual machine alongside the compiler has meant that neither is yet finished. Future work will naturally involve, first and foremost, the completion of the compiler.

Acknowledgements

I would like to thank Fred Barnes, Peter Welch and Ian East among many others for their outstanding work on occam-π and Honeysuckle. I hope they understand that if I step on their toes, it is only in an attempt to stand on their shoulders.

Trademarks

Java is a trademark of Sun Microsystems, Inc. Windows and Visual Basic are registered trademarks of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. Python is a trademark of the Python Software Foundation. 'Cell Broadband Engine' is a trademark of Sony Computer Entertainment Inc. Eclipse is a trademark of Eclipse Foundation, Inc. Unicode is a trademark of Unicode, Inc. occam is a trademark of SGS-Thomson Microelectronics Inc.

References

[1] N.C.C. Brown. Rain VM: Portable Concurrency through Managing Code. In Peter Welch, Jon Kerridge, and Fred Barnes, editors, Communicating Process Architectures 2006, pages 253-267, September 2006.
[2] Fred Barnes. occam-pi: blending the best of CSP and the pi-calculus. http://www.cs.kent.ac.uk/projects/ofa/kroc/, June 2006.
[3] University of Kent at Canterbury. Java Communicating Sequential Processes. Available at: http://www.cs.ukc.ac.uk/projects/ofa/jcsp/.
[4] Jan F. Broenink, André W. P. Bakkers, and Gerald H. Hilderink. Communicating Threads for Java. In Barry M. Cook, editor, Proceedings of WoTUG-22: Architectures, Languages and Techniques for Concurrent Systems, pages 243-262, 1999.
[5] N.C.C. Brown and P.H. Welch. An Introduction to the Kent C++CSP Library. In J.F. Broenink and G.H. Hilderink, editors, Communicating Process Architectures 2003, pages 139-156, 2003.
[6] Jon M. Kerridge. Groovy Parallel! A Return to the Spirit of occam? In Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood, editors, Communicating Process Architectures 2005, pages 13-28, September 2005.
[7] Ian R. East. The 'Honeysuckle' Programming Language: Event and Process. In James Pascoe, Roger Loader, and Vaidy Sunderam, editors, Communicating Process Architectures 2002, pages 285-300, 2002.
[8] B. Boehm. Software Engineering Economics. Prentice Hall, 1981.
[9] The Eclipse Foundation. Eclipse. http://www.eclipse.org/, June 2006.
[10] Tim Sweeney. The Next Mainstream Programming Language: A Game Developer's Perspective. In ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. ACM, 2006. Available at: http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt, June 2006.
[11] Stefan Klinger. The Haskell Programmer's Guide to the IO Monad. Don't Panic. December 2005. Available at: http://db.ewi.utwente.nl/Publications/PaperStore/db-utwente-0000003696.pdf, June 2006.
[12] Fred Barnes and Peter Welch. Prioritised Dynamic Communicating Processes - Part I. In James Pascoe, Roger Loader, and Vaidy Sunderam, editors, Communicating Process Architectures 2002, pages 321-352, 2002.
[13] Jan F. Broenink and Dusko S. Jovanovic. On Issues of Constructing an Exception Handling Mechanism for CSP-Based Process-Oriented Concurrent Software. In Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood, editors, Communicating Process Architectures 2005, pages 29-41, 2005.
[14] Gerald H. Hilderink. Exception Handling Mechanism in Communicating Threads for Java. In Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood, editors, Communicating Process Architectures 2005, pages 313-330, 2005.
[15] Correspondence on the occam-com mailing list, 2nd March 2006.
[16] Boost C++ Libraries. http://www.boost.org/, June 2006.
[17] Sun Microsystems Inc. Varargs. http://java.sun.com/j2se/1.5.0/docs/guide/language/varargs.html, June 2006.
[18] F.R.M. Barnes and P.H. Welch. Prioritised Dynamic Communicating and Mobile Processes. IEE Proceedings-Software, 150(2):121-136, April 2003.
[19] Unicode Inc. Unicode Home Page. http://www.unicode.org/, June 2006.
[20] Sun Microsystems. Class Character (Java 2 Platform SE 5.0). http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Character.html, June 2006.
[21] ONJava. 10 Reasons We Need Java 3.0. http://www.onjava.com/pub/a/onjava/2002/07/31/java3.html, June 2006.
[22] Unicode Inc. Unicode FAQ - Programming Issues. http://www.unicode.org/faq/programming.html, June 2006.
[23] F.R.M. Barnes and P.H. Welch. Mobile Data, Dynamic Allocation and Zero Aliasing: an occam Experiment. In Communicating Process Architectures. IOS Press, 2001.
Communicating Process Architectures 2006
Peter Welch, Jon Kerridge, and Fred Barnes (Eds.)
IOS Press, 2006
© 2006 The authors and IOS Press. All rights reserved.
Rain VM: Portable Concurrency through Managing Code

Neil BROWN
Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NF, England
[email protected]

Abstract. A long-running trend in computer programming is the growth in popularity of virtual machines. However, few have included good support for concurrency — a natural mechanism in the Rain programming language. This paper details the design and implementation of a secure virtual machine with support for concurrency, which enables portability of concurrent programs. Possible implementation ideas of many-to-many threading models for the virtual machine kernel are discussed, and initial benchmarks are presented. The results show that while the virtual machine is slow for standard computation, it is much quicker at running communication-heavy concurrent code — within an order of magnitude of the same native code.

Keywords. Process-oriented programming, Concurrency, Virtual Machine, VM, Rain
Introduction

The over-arching trend of computer programming in the last fifteen years has been the growth of interpreted/bytecode-based languages. It had become clear that computers would always be heterogeneous, and that compiling different versions ('ports') of programs in languages such as C usually required a tangled mess of compiler directives and wrappers for various native libraries (such as threading, networking, graphics). Using an interpreter or bytecode (grouped here under the term "intermediate language") meant that the burden of portability could be centralised and placed on the virtual machine (VM) that runs the intermediate language, rather than on the developers of the original programs. Java, the .NET family, Perl, Python, Ruby — to name but a few — are languages that have become very widely used and that run on virtual machines.

The developers of .NET coined the term managed code [1] to describe the role that the virtual machine takes in managing the resources required by the intermediate-language program. This captures nicely the advantage of intermediate languages: the virtual machine manages everything for you, removing most of the burden of portability.

Support for concurrency is a good example of the heterogeneity of computers. Traditional single-CPU (Central Processing Unit) machines, hyper-threading processors, multi-core processors, multi-CPU machines and new novel designs such as the Cell processor [2] can all provide concurrency, usually in a multitude of forms (such as interleaving on the same CPU or actual parallelism using multiple CPUs). Targeting each of these forms in a natively-compiled language (such as C) requires different code. It often cannot simply be wrapped in a library, because the differences between things such as true multi-CPU parallelism and interleaving, or shared memory and non-shared memory, are too great. I believe that the best way to achieve portable concurrency is through the use of an intermediate language.
Programs based on a process-oriented programming model (no shared data, synchronous channel communications, etc.) could be stored in a bytecode format. This bytecode could then be interpreted by a virtual machine that is built to take advantage of the concurrency mechanisms of the machine that it is running on. For example, on a single-core single-CPU machine it could use very fast cooperative multitasking (like C++CSP [3]), whereas on a multi-CPU machine it could use threads. More discussion of such choices is provided later in the paper, in section 8.

Existing virtual machines (such as the Java and .NET VMs) tend to rely on Operating System (OS) threads for concurrency (usually provided to the managed program in a threading model with simplistic communication mechanisms such as semaphores). This allows the VM to take advantage of parallelism on multi-core and multi-CPU machines, but is fairly heavyweight. 32-bit Windows can only handle 2,000 threads at the default stack size (which would use 2 gigabytes of memory), and still only 13,000 if the stack size is cut to 4kB (which would use the same amount of memory due to page sizes) [4]. This is infeasibly small for many inherently-concurrent programs where concurrency is a natural mechanism to use.

The only current VM-like system suitable for scalable process-based languages is the Transterpreter, an interpreter for the Transputer bytecode [5]. The Transterpreter is admirably small and concise, and provides portability for the KRoC compiler. However, there are a number of features that I wanted in a virtual machine, such as C++ integration, security, poisoning and exception handling (all expanded on in this paper), that would have had to be grafted on rather than being part of the design from the outset (never a good idea); therefore the Transterpreter did not suit my needs.

The remainder of this paper details the design and implementation of a new concurrency-focused virtual machine, named Rain VM after the programming language Rain described in [6].
Many of Rain VM's features are motivated by the design of Rain.

1. Virtual Machine Design

Virtual machines can be implemented in a number of ways. The Java virtual machine is entirely stack-based, whereas Parrot (the VM for Perl 6) is register-based. The opposing ideas are discussed in [7] by the designer of the Parrot VM. While the two most-used virtual machines, Java and .NET, use a stack-based architecture, studies have shown that register-based virtual machines can be faster [8]. Based on this research, I chose to make Rain VM primarily register-based.

Some modern VMs include Just-In-Time (JIT) compiling — the process of translating virtual machine code into native machine code when the former is loaded. Due to the (manpower) resources needed for JIT compiling, this is not considered a likely possibility for this project. Therefore, the focus in this paper is solely on interpreting virtual machine code.

Rain includes both process-oriented and functional-like programming; thus the virtual machine must support easy context-switching between processes as well as support for function calls. These two aspects are discussed below.

1.1. Security

Security in the context of this virtual machine consists of two main aims: protecting the virtual machine from unintentional bugs in code, and protecting the virtual machine from malicious code running on it (often referred to as sandboxing the code). The latter is a particularly ambitious aim. The idea of mobility [9] in process networks yields the possibility of untrusted processes being run on a machine. In this case it is essential to limit their behaviour to prevent the untrusted code attacking the virtual machine.

Naturally, security checks will impose a performance (and in some cases, memory) overhead on the virtual machine. While security will be vital for some uses of the virtual machine,
there will be users who will consider such security features to be unnecessary. To this end, it is intended to produce a compiled version of the virtual machine without these checks (where that is possible).

1.2. Concurrency

The difference between this virtual machine and most others is that it will be designed for concurrency — this means that there will be many contexts, one for each thread of execution. In the case of a solely register-based machine, this would mean one register block per thread. However, consider the following pseudo-code:

    int: x,y;
    x = 3;
    y = 4;
    par
    {
        x = 2;
        y = 5;
    }
    y = 2 * x;
Assume for the sake of discussion that the above code is compiled un-optimised. Using the register-based virtual machine design, each of the two parts of the par would have its own register block. Putting the initial values of x and y into the register blocks could be done as they were created (before they were put on the run queue), but getting the values back into the parent's register block after the par is not as easy. The underlying mechanism of the (un-optimised) par is that the parent process will fork off the two parallel parts and then wait on a barrier for them to complete. The proper mechanism for getting values back would be channel communication — which seems somewhat over-the-top for such straightforward code.

Instead a stack could be used; the values of x and y could be on the stack, which could be accessed by the sub-processes. For security, sub-processes could be given a list of valid stack addresses in the parent process that they are permitted to access, to avoid concurrent writing. Further justification for having a stack for special uses alongside the general-use registers is given in the next section.

1.3. Functions

The Rain programming language contains functions. Functions that are guaranteed to be non-recursive can be inlined during compilation — however, Rain supports recursive and mutually-recursive functions, which means that this virtual machine must too. Allowing arbitrary-depth recursion (the depth of which cannot be predicted at compile-time) with only registers would require many contortions that can be avoided by using a stack for arguments to be passed on.

Function calls also require the return address of a function to be stored. This is usually done by storing it on the stack. An incredibly common technique for remotely breaking into machines is the buffer overflow, which often works by overflowing a stack buffer to over-write the return address on the stack.
Theoretically, our type system (described later, in section 3) and other measures should prevent such abuse. Security works best in layers, however. Function-call return addresses will therefore be maintained on their own stack. Having two stacks in a native program would be considered wasteful and cumbersome; one of the advantages of a virtual machine is that there is greater design flexibility to do such things. The overheads of having an extra stack are minimal — the equivalent of two registers.
1.4. Forked Stacks

Consider the following pseudo-code that combines the preceding ideas:

    int: x,y;
    x = 4;
    y = 3;
    par
    {
        x = factorial(x); #Branch A
        y = factorial(y); #Branch B
    }
It has already been explained that the two parts of the par have access to the stack of their parent process. It has also been decided that function calls use the stack for parameter passing. If the two factorial calls tried to use the same stack (inherited from the parent process) for their parameter passing, then there would be a race hazard — they would both be writing to the same stack locations. Therefore the two sub-processes must have their own distinct stacks, yet still be able to access their parent's stack. Hence the idea of forked stacks.

This concept is shown in the diagram below, at the point where both stacks are at the deepest level of the factorial function (factorial(1)). The 1, 2, 3, 4 progression are the arguments to the factorial function. Return addresses are stored on a different stack, as described in the previous section. Return-address stacks are not forked, as there is no need to access the parent process's stack.
Stacks are random access, with zero being the topmost item on the stack. Note that the storage locations for x and y have different indexes in each branch. Accessing index 5 in branch A is no different from accessing index 2 in terms of the access method; the forking mechanism is transparent for the purposes of stack-item access.

1.5. Exceptions

The paper on the design of the Rain language [6] explores whether or not to include exceptions. It concludes that exceptions should be featured in the language (in a small way), and hence, as the target for the language, Rain VM must also include them. Like functions, exceptions are a mechanism best suited to using a stack. For security reasons, the exception stack will also be separate from the other stacks.

Unhandled exceptions cause a process to be terminated. Deciding what to do beyond that is unclear — for now, pessimism prevails, and the entire virtual machine is terminated. Processes cannot catch their (parallel-composed) children's exceptions. In future this could be changed according to the latest research conclusions.
2. Implementation Notes

The virtual machine has been developed using a rigorous unit-testing approach, mainly done test-first. The virtual machine is written in C++; the advantages of this are expanded on in section 10.

Registers are addressed by one byte, therefore 256 registers can be addressed. The virtual machine will guarantee that any of the 256 registers can be used. As an optimisation, however, compilers targeting the virtual machine are encouraged to allocate the lowest-indexed registers first. The virtual machine could initially allocate, say, eight or 16 registers and then increase the size of the register block when higher-indexed registers were accessed. This would allow the memory footprint of short processes (that do not use many registers) to be very small.

A process context is a combination of: register block, instruction pointer, data stack, return-address stack, and exception stack. Context-switching in one virtual machine thread (i.e. where no locks are needed) is as simple as changing the current context pointer (and possibly doing a small amount of processing to the run queue). This should mean that context-switching is as fast as executing any other virtual machine instruction, although this is a combination of the speed of the former as well as the slowness of the latter. This expectation is tested in section 11.

3. Type System

This virtual machine is strongly statically typed, primarily to mirror the design of Rain. As described in [6], Rain currently contains the following data types:

• Boolean values.
• 8-, 16-, 32- and 64-bit integers, signed and unsigned.
• 32- and 64-bit floating point numbers (with the possibility of 128-bit or larger in future).
• Tuple types of sizes 1-255, containing any mixture of the types in this list¹.
• List types (implemented as either array or linked list) of any single type in this list¹.
• Map types from any single type in this list¹ with a total ordering (see below) to any other single type in this list¹.
• Channels for communicating any single type in this list¹.
• Reading/writing ends of the above channels.
• Barriers and buckets.
• Functions and processes.

¹ With the exception that channels, channel ends, buckets and barriers cannot be contained in any other type.

The types that have a total ordering are: all numbers, tuples where all contained types have a total ordering, and lists containing a totally ordered type. Maps, functions, processes and communication primitives are not ordered. All types support testing for equality.

These types are compositional, to an unlimited depth. Booleans, integers and floating point numbers (which are contained by value in the 64-bit registers) are referred to here as primitive types, and all other types (which store a reference in the 64-bit register) as complex types. Complex types must all support four operations (referred to as type functions): creation, destruction, copying and comparison. Naturally, because the types are compositional to an unlimited depth, the type functions are as well. So the list copying function allocates a new list of the same size as the source list, and then calls the copy function of its inner type for each of the elements in the list. These functions could either have been written into the virtual machine itself, or could have been written using virtual machine code to be executed by the virtual machine. It was decided, for the sake of speed, to write them into the virtual machine (in C++).

When some bytecode is initially loaded, the virtual machine stores the types in a lookup table (which prohibits duplicate keys for the same type), allocating each type its unique key. Type functions for the type are created once, and stored under the same key. Whenever a type function is enacted on a value afterwards, it uses the key for speed — because the types can be of unlimited depth, the exact description is of unlimited length, which could take a long time to compare.

3.1. Type Safety

Each register in the virtual machine could have been simply a 64-bit integer. Any other values (32-bit integers, references to lists or maps) would have been converted into integer form and stored. This would have allowed an instruction to treat the same integer as any of those types without restriction. This, however, is bad from a security stand-point; an integer could be treated as a pointer, and thus arbitrary memory locations could be accessed or written to. Therefore type safety was added.

Each register in the virtual machine is an instance of data-storage. Every data-storage is a 64-bit value with an additional type identifier. Currently this type identifier is a 32-bit integer that references the lookup table described in the previous section. This means the data-storage is 12 bytes in size. A possible optimisation is to increase this to 16 bytes for better alignment on 64-bit machines (at the cost of memory usage). Every operation on this data-storage is type-checked. Deciding what to do when type-safety is broken is a difficult problem that has not yet been fully examined.
Unlike invalid class casts in Java, which can be caught by the language, or type errors in languages such as Perl, where a conversion is usually attempted, type exceptions in this virtual machine are considered fatal, just as an invalid instruction code is. They are not visible to the virtual machine code — that is, they cannot be detected or caught by the virtual machine code. They are currently treated as unhandled exceptions, and therefore (as mentioned in section 1.5) terminate the virtual machine.

3.2. Decoupling

Rain VM is the target compilation platform for Rain. Since Rain will already enforce correct typing at the language level, adding these extra checks may seem superfluous. The checks are born of a desire to decouple the virtual machine from the language; while I do not envision Rain targeting anything but the virtual machine, I hope that the virtual machine may be targeted by other languages. Rain VM has a complete assembly language that the Rain compiler uses; there is no reason why other languages could not target this assembly language too. These checks are therefore for other languages and also for untrusted code. A virtual machine would also lose all its safety advantages over native code without these checks.

4. Bytecode Design

The virtual machine instructions are entirely in 32-bit increments — usually one 32-bit word per instruction. This makes fetching instructions straightforward. The majority of the instructions are in the format: 8-bit instruction opcode, 8-bit instruction sub-opcode, 8-bit destination register, 8-bit source register. This is not fixed, however, and is merely the convention that is usually followed. A possible optimisation for the future is to fetch 64 bits at a time (given that most machines in the future will be 64-bit, this would be more efficient) and then decode that into instructions.
The diagram below depicts the output bytecode instruction. The bits on the left are the most significant. The first (most significant) byte is the (provisional) output instruction opcode. The next byte is unused, so it must be set to zero. The third byte is the index of the register that holds the channel writing-end. The fourth (least significant) byte holds the data to be sent down the channel.
5. Assembly Language

The virtual machine has its own assembly language and assembler. This should make it easy for other programming languages to target the virtual machine. The syntax is of the form move r0,r2 — that instruction moves the contents of register 2 into register 0. Full documentation of the assembly syntax, bytecode instructions and semantics for all instructions is currently being worked on, and should be made available when the virtual machine is.

6. Terminology

Terms involved in concurrent programming often have ambiguous meanings. In this paper they are used as follows. Parallelism refers to two physical devices acting in parallel (be it multiple CPUs or multiple cores on the same CPU), whereas concurrency encompasses parallelism and other techniques, such as interleaving, that convey the effect of two processes acting at the same time. A thread is used here to refer to concurrency at the Operating System (OS) level — this may be termed threads or (confusingly) 'processes' (used in quotes to differentiate this use from mine) by the OS. To achieve parallelism, multiple threads must be used. A process refers to a block of code and an accompanying program state. The virtual machine contains a set of active processes to be run concurrently. A process in Rain VM is more fine-grained than a process in Rain; a par block in Rain is a process in Rain VM.

7. Communication

This section details a few issues that are common to all of the approaches to implementing concurrency given in section 8. Consider the OS threads T0 and T1. T0 is currently inter-leaving between two processes: PA and PB. T1 is inter-leaving between Pα and Pβ. PA and Pα are connected by a channel, as are PB and Pβ.
N. Brown / Rain VM: Portable Concurrency Through Managing Code
PA makes a call to write on its channel. Its thread T0 cannot block on this write, as T0 must instead switch to PB while the communication is pending. That is, the communication must take place asynchronously. There are two methods of making asynchronous calls: polling and interrupts/callbacks. Inter-thread communication via interrupts or callbacks is a tricky area that is not explored here — as the thread would be interrupted asynchronously, it could be in an unknown/partially-invalid state when it was interrupted, such as updating its run queue. Polling would require T0 to check back periodically for completion of the communication by T1. This could be done by having shared memory between the two threads protected by a mutex (or similar), but if T0 holds the mutex when T1 goes to check the memory then T1 must either block (which this is intended to prevent) or must come back again later. The latter could lead to an indefinite wait if T0 always gets the mutex just before T1 arrives. Therefore polling would have to be done through some sort of message-passing system. Each thread could have its own inbox of messages (from which it could receive messages from any virtual machine OS thread) for communication notifications. To implement this, a Message-Passing Interface (MPI) [10] package could have been used. Using buffering, MPI systems can send messages asynchronously (without blocking), which would solve the problem. However, MPI cannot guarantee this behaviour. Alternative mechanisms will be explored in future; TCP sockets are one possibility, but individual operating systems may offer their own solutions.

7.1. Mobility

There are complications with channel communications between concurrent processes in the virtual machine, related to the implicit mobility of channel ends. Consider a process PA that allocates a one-to-one channel C. PA spawns PB and PC in parallel, passing them the two ends of the channel. PB spawns PD and PE, giving the latter its end of C.
When PE and PC come to use the channel, at least one of them will need to know where the other end is. This is not a matter of the address in memory (that will be constant), but whether the two processes are in the same thread (and can have a very quick communication) or in different threads (which will require a different communication mechanism). This is compounded in some of the concurrency suggestions below, where processes can move from being in the same thread to being in separate threads during their execution. KRoC.net solves a similar problem [11] by coordinating the channel ends via an administration node for the channel. That is, when a channel end is moved, it contacts the administration node to register its new location and to find the current position of the other end of the channel. The implication of this and similar solutions is that moving a process to a new thread involves a larger overhead than simply manipulating the run queues. This cost should be borne in mind when considering the options presented in the next section.

8. Concurrency in the Virtual Machine

8.1. Overview

In section , I posited that virtual machines would be the best way to take advantage of the variety of mechanisms for parallelism available on heterogeneous systems. Implementing this virtual machine alongside a full programming language (detailed in [6]) has been a large undertaking, so currently there is only a simple form of concurrency present in the virtual machine: a single-threaded interleaving through all the available processes. This section discusses the various implementation ideas for a multi-threaded virtual machine and attendant problems for the future.
8.2. Context-Switching

There are a variety of methods for choosing when to context-switch between processes. Concurrent systems will invariably context-switch when one process makes a blocking call. OS threads usually switch asynchronously when a timer interrupt occurs. Programs are usually able to suggest or force a context switch through some form of yield() call. occam-π [12] can compile in a possible context-switch at the end of loops. In a virtual machine, very fine-grained control is possible — for example, a context-switch could be done after every 25 executed instructions. In channel-based process-oriented systems communications are usually frequent and thus context-switches occur quite often (because the channel reads and writes will block roughly one attempt in two — see footnote 2). However, if a process is doing some computation it may not be desirable to let it monopolise the processing resources. Hence, being able to interrupt the virtual machine after an arbitrary number of instructions to perform a context-switch is a useful option.

8.3. Interleaving

Interleaving is an incredibly simple mechanism for implementing concurrency in the virtual machine. One thread executes virtual machine instructions for the current context until a switch is made. The current context pointer is changed to point at the new context, and those instructions are then executed until a further context switch. The disadvantage is that there is no parallelism involved, but it can be combined with other concurrent mechanisms to form a many-to-many approach (multiple parallel threads, each interleaving various concurrent processes).

8.4. Thread-per-process

Operating systems almost always offer parallelism through the mechanism of threading. This may or may not be distinct from running multiple OS ‘processes’ — Linux has historically favoured the use of fork() to create new ‘processes’, whereas Windows favours threads inside a single ‘process’.
This is therefore an easy semi-portable way for our virtual machine to utilise true parallelism on multi-processor and multi-core systems. One way to use threads would be to allocate one thread for each process. This would make all inter-process communications identical (each thread could simply block when waiting for a channel communication), which would make the implementation relatively simple. However, it would be wasteful in terms of overhead, and would mean that the virtual machine had gained no performance benefits over the clunkiness of threads and the restrictive limits on their scalability described in section .

8.5. Strictly Hierarchical Many-to-many Approach

One way to combine the OS thread and interleaving approaches would be a hierarchy. Each OS thread would be in charge of interleaving a set of processes. Sub-processes spawned would remain in the same OS thread, unless the set of processes grew beyond an arbitrary limit, at which point another OS thread would be created to handle that process and its sub-processes. The diagram below shows this approach — the arrows indicate a parental relationship (e.g. A is the parent of D):

Footnote 2: With two parties involved in a communication, the first one to attempt to synchronise will block, but the second will not as the first is already waiting.
This approach mirrors the hierarchical nature of process-oriented programs. Most communications between processes will occur between sibling processes spawned by the same parent — these communications would be quick ones between interleaved processes in the same OS thread. The main drawback is its inflexibility. If the program has a worker/manager pattern, then all the workers may end up in the same OS thread — clearly undesirable. The virtual machine could be amended to accept hints from the programmer as to when to split child processes into separate threads, but ideally allocation would be done without such hints.

8.6. Pooling

Pools of threads are a relatively common technique in threaded programming. To avoid the cost of the creation and destruction of threads on demand, a pool of threads is created; if no work is available then they sit idle. In this case, the work would be virtual machine processes to execute. The pool in what I term ‘peered pooling’ would be held in a shared memory area, guarded by a mutex. By contrast, the idea of ‘dictated pooling’ involves an active manager thread handling the pool of processes. These ideas are explored below.

8.6.1. Peered Pooling

The threads could work in one of two ways: either a thread could take a single process at a time and return it to the pool when the process blocks, or each thread could hold a number of processes to interleave, and only pick up more when all its current processes are blocked. The first approach would make channel communications difficult; if a process is moving between threads then the message-passing system described in section 7 would lack a permanent or even semi-permanent destination for a process. Communications would instead have to be stored in a common shared memory area (protected by a mutex). This shared memory area would be very heavily used by all the threads, which in turn would slow all the threads down, as only one could access the area at once.
The second approach would involve a thread taking on a new process each time all of its currently managed processes blocked. This would have to be cleverly limited, to stop the thread taking on too many processes because its current ones happen to all block simultaneously. With either approach, new processes would have to be placed into the shared area once created, awaiting a thread able to execute them. Ideally, if no thread was near-immediately available, a new one would be created. This would require some form of central co-ordination amongst the threads, which leads to the dictated pooling idea.

8.6.2. Dictated Pooling

Dictated pooling extends the peered pooling idea by introducing a central coordinator thread. Rather than having a shared memory area for the shared information, the coordinator thread keeps all such information. Information is fed in via its message-passing system, and sent back out in the same way. Processes that block could be sent back to the coordinator. Threads with no more processes to run could request more from the coordinator. Peered pooling and dictated pooling are similar ideas. Peered pooling appears simpler (although very inefficient), but dictated pooling more closely fits the channel-based communication that is used by process-oriented systems. The best way to judge the performance trade-offs would be a trial implementation rather than extensive on-paper design considerations.

8.7. Linux 2.6

During the development of the Linux kernel 2.6, the old O(n) 2.4 scheduler was replaced with a new O(1) scheduler [13]. The Linux kernel scheduling problem is very similar to our VM scheduling problem, so the Linux kernel methodology is a useful reference here. The Linux kernel's approach is broadly similar to the ‘Strictly Hierarchical Many-to-many Approach’ described above in section 8.5. Each CPU is in charge of maintaining a run-queue for itself that is protected by a lock. Forked processes are kept on the same CPU initially. In order to load-balance, CPUs with a light load find a busy CPU and lock its run-queue in order to move some of the busy CPU's processes onto the lightly-loaded CPU.

8.8. Threads and CPUs

All of the above ideas are predicated on arranging the VM processes into OS threads. The number of threads to be used should be determined by the number of CPUs/cores. Running only two threads on a four-CPU machine would not fully utilise the parallelism available. Running ten threads on a single-core single-CPU machine would be very inefficient; channel communications would be done between threads rather than using the much quicker mechanisms available within the same thread. The ideal equilibrium would appear to be a thread per core/CPU. However, the OS will not necessarily schedule each thread on a different CPU, so the parallelism may still be under-utilised. The optimum number of threads is likely to be slightly higher than the number of cores/CPUs available, except in the case of a single-CPU (single-core) machine, where one thread would be optimal. Experimentation (see the next section) will help to determine the optimal number.

8.9. Outcome

Various options have been proposed above for the virtual machine's concurrency mechanism.
Rather than making an early decision, I intend to implement the most promising options and compare their performance. It may be the case that multiple systems are retained in the virtual machine — the flexibility allowing good performance across multiple different systems.

8.10. Implementation Transparency

Whichever mechanism is chosen on a particular OS, it will be transparent to the programmer. All implementations will have the same semantics (e.g. synchronised communication). While the exact scheduling choices may differ, the advantage of process-oriented programming is that these choices should not affect the overall behaviour of the program.

9. Debugging

While programming process-oriented systems, there inevitably comes a time when a program runs and at some point exits with the message “DEADLOCK”. The instinctive reaction is to want to know why — that is, what every process was doing at the time, and which was the last process to block (and hence cause the deadlock). An advantage of using a virtual machine is that this information should be easy to provide.
It is not just post-mortem inspection that a virtual machine can facilitate; stepping through a program with a debugger should be easier in a virtual machine than in native code. As with many features, there has not yet been time to implement a debugger, but it should be possible. The challenge will be to make the debugger work with some of the threading that is intended. There will inevitably be a performance hit for such a feature, but it would only be used by a programmer during program development, and not when a program is actually deployed.

10. C++ Interface

Inevitably there will be reasons for programmers of interpreted languages to interface with C/C++, such as needing to access a C/C++ API for which the particular language has no libraries, or writing high-performance code. Accordingly, most languages offer a C interface. Java has the Java Native Interface [14]. Many other languages are served by SWIG [15]. The other advantage of such an interface is that it gives programmers a measure of comfort when trying a new language. As Advanced Perl Programming [16] notes: “The ability of languages such as Perl, Visual Basic, Python, and Tcl to integrate well with C accords them the status of a serious development language, in contrast to awk and early versions of BASIC, which were seldom used for production applications.” As described in [6], these practical measures will encourage the transition from C++ to Rain. The virtual machine is written in C++. Concepts such as data-storage (described in section 3.1), lists, channels and maps are all C++ objects. This makes exposing a C++ interface to them very easy. Its channel and process concepts can be integrated with C++CSP. This tight integration should be very useful to C++ programmers wishing to use Rain (or any other language that targets Rain VM) with existing C++ code.

11. Benchmarks

The main aims for this virtual machine were support for concurrency, portability and security. Time and time again, however, the barrier to adoption of a new programming tool amongst programmers has been speed. Java faced a barrage of criticism that it was too slow for years after its initial release. I hope that this ‘speed is all-important’ mentality will one day be left behind, but nevertheless in this section I present some initial performance benchmarks. The benchmark timings are listed as real time, not system time; each result is the average of 50 runs and thus any interruptions on the machine are expected to average out, allowing a fair comparison. The benchmarks were carried out on an AMD Athlon 64 3000+ (1.8 GHz raw clock speed) CPU with dual-channel DDR memory running Gentoo GNU/Linux. Both Rain VM and the GCC-compiled tests used native 64-bit code, but KRoC (the occam-π compiler [17]) was compiled for 32-bit as some of its dependencies do not currently support compilation for 64-bit.

11.1. Instruction Speed Test

The first benchmark is intended to measure the rough speed of the virtual machine by timing a simple counting loop. The C++ version is simply:

for (usign64 i = 0; i < 1000000; i++) {}
and the occam-π version is the equivalent code. The Rain assembly version consists of two instructions (an add and a conditional jump) to achieve the same effect. The timing for each loop iteration (averaged out over fifty separate runs of a million loops) is as follows:
  Language/Compiler       Time per loop iteration (nanoseconds)
  occam-π/KRoC 1.4.0        11
  C++/GCC 3.4.5              1
  Rain VM                  404

The results show that the virtual machine is roughly 400 times slower than the native-compiled code. This is a slightly disappointing result.

11.2. Context-switching Test

The next benchmark, compared against the previous one, tests the context-switching speed. The C++CSP code ran x copies of this code:

for (usign64 i = 0; i < 1000000; i++) {yield();}
in parallel. The Rain assembly version had three instructions (a yield, an add and a conditional jump), with x copies of that code running in parallel. The occam-π version was the equivalent of those two. The results for x = 2 and x = 10 are given below.

  Language/Compiler       Time per loop iteration,    Time per loop iteration,
                          x = 2 (nanoseconds)         x = 10 (nanoseconds)
  occam-π/KRoC 1.4.0        26                          128
  C++CSP/GCC 3.4.5         534                         2645
  Rain VM                 1063                         5126
The times given above ‘per loop iteration’ are for the time for all x iterations. That is, given the time t for 2 copies of the loop running 10^6 times in parallel, the times recorded above are t/10^6, not t/(2 × 10^6). These times are perhaps more useful when compared amongst the same system. Such comparisons are given below. In an ideal (zero-overhead for concurrency) world, the ratios would be 2 : 1 and 5 : 1.

  Language/Compiler       Ratio (x = 2):Sequential,   Ratio (x = 10):(x = 2),
                          2 d.p.                      2 d.p.
  occam-π/KRoC 1.4.0        2.36 : 1                    4.92 : 1
  C++CSP/GCC 3.4.5        534.00 : 1                    4.95 : 1
  Rain VM                   2.63 : 1                    4.82 : 1
The impact of introducing the context-switches to the C++(CSP) code is immediately apparent. Running two yielding copies in parallel is approximately 250 times worse than the ideal ratio. By comparison, Rain offers a ratio not far from the optimal 2 : 1, as does KRoC. All the systems scale approximately linearly in the number of copies being run in parallel. The performance being slightly better than linear is thought to be related to the cache. A naive bit of arithmetic, subtracting the calculation times in section 11.1 from the times with context-switching for x = 2 in the table above (divided by two), gives a rough idea of the time that context-switching takes:

  Language/Compiler       Rough estimate of context-switch time (nanoseconds)
  occam-π/KRoC 1.4.0        2
  C++CSP/GCC 3.4.5        266
  Rain VM                 128
This indicates that the virtual machine can switch contexts faster than the native-compiled C++CSP code, which is very encouraging. As ever, KRoC performs incredibly well.

11.3. CommsTime Test

The virtual machine is intended for concurrency, so a benchmark with concurrent communicating processes is more apt. The quasi-standard CommsTime [18] benchmark is chosen. This benchmark consists of a process ring containing a prefix process, a successor process and a delta process, the delta also being connected to a recorder. In essence, CommsTime is a concurrent version of the counting loop used in the above benchmark. C++CSP, KRoC and the Rain VM all use interleaving concurrency in the same OS thread, so the comparison is fair (whereas JCSP uses OS thread-level concurrency, so it is not included). The timings are given below.

  Language/Compiler       Time per CommsTime iteration (nanoseconds)
  occam-π/KRoC 1.4.0        272
  C++CSP/GCC 3.4.5         1327
  Rain VM                  1991

The results show that the virtual machine is only around fifty percent slower than the native-compiled C++CSP code at performing CommsTime. This indicates that the performance difference between native code and the virtual machine on communication-heavy code (as CSP code is wont to be) will be less than an order of magnitude. KRoC has the best time again, but is less than an order of magnitude different to Rain VM.

12. Conclusions and Future Work

This paper has detailed the design of a new virtual machine designed specifically for highly-concurrent process-oriented programs. The virtual machine is intended to run programs for the Rain programming language [6], but can be targeted by any language. There is a full assembly language for programming the VM. Much of the virtual machine has been implemented and tested. The VM incorporates type safety to remain stable if it encounters defective code, and dynamic register allocation will soon be added to allow small processes to have a very small memory footprint. A C++ interface is planned, to allow interaction with existing C++ code via C++CSP. A debugger is also intended for implementation, to aid programming in Rain and other languages that target the VM. A detailed discussion of threading models and associated problems was provided in sections 7 and 8. The process-oriented programming style allows concurrency to be used naturally and easily, but in order to provide this, tools such as this VM must be built on top of the existing unwieldy and unsafe concurrent mechanisms offered by modern operating systems. Experimentation with implementing the various suggestions has been proposed — the outcome of which will guide the final choice of threading model. The benchmarks show that the virtual machine is two orders of magnitude slower than native code for executing standard instructions.
However, the concurrent CommsTime benchmark was only twice as slow as native C++CSP code, showing that for concurrent communication-oriented code the virtual machine is within an order of magnitude of native C++CSP and KRoC. The primary focus of future work will naturally be finishing and optimising the virtual machine. This includes both implementing the remainder of the instruction set and also experimenting with and implementing the threading model. Process priorities are an obvious candidate feature for inclusion when implementing the threading model.

Trademarks

Java is a trademark of Sun Microsystems, Inc. Windows is a registered trademark of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. Python is a trademark of the Python Software Foundation. ‘Cell Broadband Engine’ is a trademark of Sony Computer Entertainment Inc. occam is a trademark of SGS-Thomson Microelectronics Inc. Gentoo is a trademark of Gentoo Foundation, Inc. ‘AMD Athlon’ is a trademark of Advanced Micro Devices, Inc.

References

[1] Brad Abrams (Microsoft). What is managed code? http://blogs.msdn.com/brada/archive/2004/01/09/48925.aspx, June 2006.
[2] IBM. Cell Broadband Engine Architecture. http://domino.research.ibm.com/comm/research_projects.nsf/pages/cellcompiler.cell.html, June 2006.
[3] N.C.C. Brown and P.H. Welch. An Introduction to the Kent C++CSP Library. In J.F. Broenink and G.H. Hilderink, editors, Communicating Process Architectures 2003, pages 139–156, 2003.
[4] Raymond Chen (Microsoft). Does Windows have a limit of 2000 threads per process? http://blogs.msdn.com/oldnewthing/archive/2005/07/29/444912.aspx, June 2006.
[5] Christian Jacobson and Matthew C. Jadud. The Transterpreter: A Transputer Interpreter. In Ian R. East, David Duce, Mark Green, Jeremy M. R. Martin, and Peter H. Welch, editors, Communicating Process Architectures 2004, pages 99–106, 2004.
[6] N.C.C. Brown. Rain: A New Concurrent Process-Oriented Programming Language. In Peter Welch, Jon Kerridge, and Fred Barnes, editors, Communicating Process Architectures 2006, pages 237–251, September 2006.
[7] Dan Sugalski. Registers vs stacks for interpreter design. http://www.sidhe.org/~dan/blog/archives/000189.html, June 2006.
[8] Yunhe Shi, David Gregg, Andrew Beatty and M. Anton Ertl. Virtual Machine Showdown: Stack versus Registers.
In ACM/SIGPLAN Conference on Virtual Execution Environments (VEE ’05), pages 153–163, June 2005.
[9] F.R.M. Barnes and P.H. Welch. Prioritised Dynamic Communicating and Mobile Processes. IEE Proceedings-Software, 150(2):121–136, April 2003.
[10] W. Gropp, E. Lusk and A. Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. MIT Press, 1994.
[11] Mario Schweigler. Adding Mobility to Networked Channel-Types. In Ian R. East, David Duce, Mark Green, Jeremy M. R. Martin, and Peter H. Welch, editors, Communicating Process Architectures 2004, pages 107–126, 2004.
[12] Fred Barnes. occam-pi: blending the best of CSP and the pi-calculus. http://www.cs.kent.ac.uk/projects/ofa/kroc/, June 2006.
[13] Josh Aas (Silicon Graphics Inc). Understanding the Linux 2.6.8.1 CPU Scheduler. http://josh.trancesoftware.com/linux/linux_cpu_scheduler.pdf, June 2006.
[14] Sun Microsystems Inc. Java Native Interface (JNI). http://java.sun.com/j2se/1.5.0/docs/guide/jni/, June 2006.
[15] Simplified Wrapper and Interface Generator (SWIG). http://www.swig.org/, June 2006.
[16] Sriram Srinivasan. Advanced Perl Programming. O’Reilly, 1997.
[17] University of Kent at Canterbury. Kent Retargetable occam Compiler. Available at: http://www.cs.ukc.ac.uk/projects/ofa/kroc/.
[18] Roger M.A. Peel. A Reconfigurable Host Interconnection Scheme for Occam-Based Field Programmable Gate Arrays. In Alan G. Chalmers, Henk Muller, and Majid Mirmehdi, editors, Communicating Process Architectures 2001, volume 59 of Concurrent Systems Engineering, pages 179–192. IOS Press, Amsterdam, The Netherlands, September 2001.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Native Code Generation Using the Transterpreter

Christian L. JACOBSEN, Damian J. DIMMICH and Matthew C. JADUD
Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NZ, England
{clj3, djd20, mcj4}@kent.ac.uk

Abstract. We are interested in languages that provide powerful abstractions for concurrency and parallelism that execute everywhere, efficiently. Currently, the existing runtime environments for the occam-π programming language provide either one of these features (portability) or some semblance of the other (performance). We believe that both can be achieved through the careful generation of C from occam-π, and demonstrate that this is possible using the Transterpreter, a portable interpreter for occam-π, as our starting point.

Keywords. Transterpreter, native code, GCC, occam-π
Introduction

The Transterpreter [1] is a virtual machine for CSP [2] and Pi-calculus [3] based languages. Intended as a platform for concurrent language design and research, it also fully supports the execution of programs written in the occam-π [4] language. To support our research goals, the Transterpreter was designed to be extensible and portable. This does mean that it is unlikely to achieve the level of performance that could be obtained by compiling directly to native code. However, what the Transterpreter lacks in performance, it possesses in portability, and it has enabled the execution of occam-π on platforms as diverse as Mac OS X (PowerPC) and the LEGO Mindstorms (H8/300) [5]. Porting the Transterpreter to a POSIX-compliant OS, such as Mac OS X, is no harder than recompiling 4000 lines of ANSI C. Where the platform is more specialised, such as the LEGO Mindstorms, a new wrapper must be written to interface with the platform appropriately. Such a wrapper can be as short as 100 lines of code. Techniques to improve the performance of the Transterpreter are constantly being evaluated. Such performance optimisations are, however, only implemented where they do not sacrifice portability. As an example, Just-In-Time (JIT) compilation may not be practical on all platforms, and must therefore be implemented in a way that does not make the virtual machine unportable. Additionally, recent ports of the Transterpreter to the Tmote Sky [6], a wireless sensor network platform based on the Texas Instruments MSP430 [7], and to the multi-core Cell Broadband Engine [8,9], have identified platforms with specialised needs on which interpretation provides a suboptimal solution. On small battery-powered devices, power conservation is likely to be a major concern and the overhead of interpretation may therefore not be acceptable.
The Cell Broadband Engine, on the other hand, contains specialised vector processing units, capable of performing high performance vector operations on 128-bit wide data. These units are also capable of running scalar code, and therefore the Transterpreter, but they do so with a large performance penalty. In order to effectively use
C.L. Jacobsen et al. / Native Code Generation Using the Transterpreter
occam-π on the Cell Broadband Engine, on wireless sensor networks and in a host of other specialised applications, it seems necessary to be able to generate efficient native code. Given the diverse run-time environments we wish to target, the goal becomes not just the generation of native code, but to realise a code-generation framework which can, without undue effort, target a wide variety of architectures. Our ultimate goal is therefore to create a native code compiler which is as portable as the Transterpreter, yet generates executables with performance comparable to the existing native code compilers. In this paper we therefore explore the steps necessary for generating efficient native code in a retargetable manner.

1. An Overview of occam Environments

A number of occam compilers and runtimes exist which allow occam programs to be interpreted or executed natively. These environments vary in terms of their portability and performance, as well as the assistance they can offer the developer in implementing sound concurrency. Each environment presents its own challenges as a starting point for developing an efficient, portable code generator for occam-inspired languages.

1.1. Existing Compilers

The most prominent occam compiler currently available is the “Kent Retargetable occam-π Compiler”, KRoC [10,11], which is being developed at the University of Kent, originally as part of the occam-for-all project [12]. KRoC compiles both occam2.1 and the new dynamic occam-π language [4] to native code using the custom tranx86 [13] code generator. tranx86 generates code for the IA-32 architecture; code generators for PowerPC, MIPS and SPARC targets are in development. SPOC, the “Southampton Portable occam Compiler” [14,15], implements the occam2 and occam2.1 languages [16] by compiling them to ANSI C code. This allows SPOC to generate code for a wide variety of platforms, even though it has not been under active development for some time.
Other occam compilers exist, notably the occam1 [17] compiler in the Amsterdam Compiler Kit [18,19,20], which can generate native code for the IA-32 and a number of more archaic architectures. occam compilers which do not generate code for modern architectures are mostly of historical interest, and include the original Inmos compilers, written in occam, and later rewritten in C. occam has also found use in compiler courses, as is the case with the μoccam dialect developed and used at the University of Edinburgh [21]. A number of new occam-inspired compilers are currently being developed. 42 [22] is a small extensible compiler aimed at concurrent language design and development. It implements a small CSP-based language as the foundation for future research. 42 uses the Transterpreter as its underlying runtime, and has supported the work presented in this paper as the host for the C code generator. KRoC has approached the point where it is becoming extremely difficult to extend sensibly, and the NoCC compiler [23] is being developed at the University of Kent to replace it. Additionally, development of the Grid-occam compiler, an occam compiler for distributed computing on the .NET platform, has been reported in [24].

1.2. Towards Native Code

In order to develop a portable and efficient code generator for occam-π programs, we have explored the most desirable features of the existing environments described above. This has made it possible to devise an approach which combines the best features from each of the existing implementations.
C.L. Jacobsen et al. / Native Code Generation Using the Transterpreter
The virtual machine solutions are generally the most portable, and are often implemented in languages which compile on a wide variety of platforms. Virtual machines, where they have been designed to be cleanly separable functional units, can be viewed as a set of interrelated components for implementing and executing a language. The Transterpreter’s scheduler is written entirely in ANSI C, as a set of reusable functions. These scheduler functions will be utilised unmodified in our native code generation approach. As we are investigating fast and portable native code generation, we will not require any other direct support from the Transterpreter.

SPOC compiles to ANSI C code, which is further translated to native code by GCC or some other C compiler. This achieves good performance, especially for sequential code, and provides good portability. The performance of scheduling code is affected by SPOC having to work within the constraints of the C programming language, and it cannot provide the same performance as KRoC.

KRoC, and its successor NoCC, make use of custom native code generators capable of generating very efficient scheduler code. While they do optimise sequential code, our testing indicates that they do not achieve the same performance as SPOC, which has most of GCC’s optimisations at its disposal. Our intent is to attempt to combine the performance of KRoC’s scheduling code with the sequential performance and portability of SPOC.

To generate efficient native code, we look to the approach used in the Amsterdam Compiler Kit, which eliminates the need to implement native code generators for each new language front-end. This is done by using a common intermediary language, which has a persistent form that a language front-end can store to disk. Back-ends can then be invoked on the (architecture independent) intermediary code, in order to generate native code. Thus, effort invested in the code generator will automatically benefit all language front-ends [18].
Unfortunately GCC does not allow language front-ends to be decoupled from back-ends in this manner, as its intermediary language can only be expressed in memory. Other compiler frameworks provide intermediary formats which are easier to target than GCC's, but they generally do not provide code generators for nearly as many platforms. We would ideally write a new front-end for the GCC compiler, enabling the generation of efficient native occam-π code in a portable manner. This is a large undertaking, and we instead settle for implementing an alternative translation through C. We can then use this to explore the performance that may ultimately be gained from employing the GCC toolchain as our back-end.

2. Using C as an Intermediary Language for occam

C is a useful intermediary language, popular because of its wide availability and portability. Many languages, including Haskell (compiled by the Glasgow Haskell Compiler, GHC [25]), nesC [26] (an event-based language for embedded systems programming) and Mercury [27] (whose implementation employs techniques similar to those described in this paper), are compiled first to C, and then to native code using GCC. Using C as an intermediary language can present some specific challenges however, as C was not designed specifically to be an intermediary language. For example, GHC’s “Evil Mangler” [28] must modify the assembly instructions generated from C by the GCC compiler in order to let GHC manage its own stack. In order to translate an occam-π program into C, we must therefore ensure that the C language allows us to fully express the semantics of an occam-π program.

2.1. occam-π Particulars

occam-π uses co-operative scheduling, implemented by well-defined scheduling points in a program. The most common scheduling points are channel communications, which are explicit synchronisation points for occam-π processes. Channel communication is effectively a rendezvous between two processes, where both must have engaged in and completed the communication before either is allowed to continue. The first process to arrive at a channel communication must therefore block and wait for the process communicating on the other end to arrive. Our occam-π processes will all be multiplexed within a single operating system context, and the execution of the first process must therefore be suspended, and an available process scheduled in its stead. This will allow the program to progress to the point where the second process arrives to complete the communication. An occam-π program must therefore be able to suspend execution of a process at a rescheduling point, and resume execution of some other process at an arbitrary rescheduling point. This is not a natural style of programming to employ in C, which has no explicit support for suspending and resuming execution at well-defined, but arbitrarily paired, points in a program.

2.2. Translating to C

When using C as an intermediary language we do not have direct access to the executing program’s instruction pointer, and are therefore unable to use it to perform unpredictable jumps between points in a program. We must also honour the C stack, which makes jumping between functions difficult. ANSI C defines goto and labels, but these are not particularly useful, as a goto must reference a named label. This means that a goto cannot jump to an arbitrary location in a program, but only to the specific label it references. Furthermore, gotos are not allowed to cross a function boundary, as this would most likely invalidate the C stack. The pure ANSI C goto is therefore not expressive enough to allow the implementation of the transfer of control needed to implement an occam-π scheduler.
While it is possible to use inline assembly constructs to perform arbitrary jumping within a program, the approach taken by CCSP [29], we do not consider this a feasible solution, as we are extremely attentive to our overall goal of ensuring easy portability. Allowing the use of inline assembly instructions would require that these be re-written for each target architecture. If this assembly becomes non-trivial, we have failed in creating an easily portable native code occam-π compiler. Use of the setjmp/longjmp macros to facilitate transfer of control within a program is also problematic. It is possible to find examples of coroutines implemented using setjmp/longjmp. However, these rely on specific behaviours of the macros which are not well defined. Furthermore, as noted in [30], the macros are notoriously difficult to implement, and it is unwise to make generalised assumptions about them, making such exotic use of setjmp/longjmp likely to be non-portable.

2.3. Using ‘switch’ Statements

The SPOC compiler has already demonstrated that it is possible to translate occam2 and occam2.1 into ANSI C, and compile the result using GCC or some other ANSI C compiler. SPOC translates occam constructs directly to C where possible, so that occam SEQs and WHILEs are translated into for or while loops. SPOC must also deal with the scheduling of occam processes, and does this by making heavy use of the C switch statement. The job of the SPOC scheduler is to obtain processes from the scheduling queue or channel queues, and to continue a process’s execution by selecting the correct branch of a switch statement. The scheduler dispatches execution to a process by first obtaining a pointer to the C function corresponding to it, as well as a pointer to its environment, all wrapped up in the process descriptor taken off the run-queue. The environment contains the IP variable, which holds the value of the branch in the function’s switch statement that must be executed next.
When the function is entered (see listing 1), control will be passed to the appropriate branch (case) in the switch statement. As each branch in the switch statement contains one atomic sequence of occam code, control does not need to be transferred until execution completes the statement immediately before the next branch (the OUTPUT1 macro in listing 1). The arguments to the OUTPUT1 macro contain the ‘label’ of the next branch to execute, in this case ‘1’. The arguments also specify the channel used for communication and the memory location containing the item being communicated. When the OUTPUT1 macro needs to schedule another process, a C return statement is executed, returning control to the scheduler, which is then able to select the next process from the run-queue and resume its execution. The process shown in listing 1 will eventually be placed back on the run-queue, and subsequently executed by the scheduler. When this happens, execution will resume from case 1.

switch (Env->IP) {
  case 0:
    Env->Temp = 97;
    OUTPUT1(FP->Chan51, &Env->Temp, 1);
  case 1:
    ... /* More cases may follow */
}

Listing 1. Part of a simple SPOC process
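The dispatch scheme in listing 1 can be illustrated with a small self-contained sketch. The process and scheduler names below are hypothetical, and the channel is reduced to a single word plus a ready flag; this is not SPOC's actual runtime, only the switch-on-resume-point technique it relies on:

```c
#include <assert.h>

/* Each process is a C function whose body is a switch on a stored
   resume point; returning yields control back to the scheduler. */
typedef struct { int ip; int value; } Env;

/* Producer: sends 97 on its first activation, then finishes. */
static int producer(Env *env, int *chan, int *chan_ready) {
    switch (env->ip) {
    case 0:
        *chan = 97;          /* place the item in the channel word */
        *chan_ready = 1;     /* mark the communication as pending  */
        env->ip = 1;         /* resume from case 1 next time       */
        return 0;            /* yield to the scheduler             */
    case 1:
        return 1;            /* finished */
    }
    return 1;
}

/* Consumer: yields until the channel is ready, then reads the item. */
static int consumer(Env *env, int *chan, int *chan_ready) {
    switch (env->ip) {
    case 0:
        if (!*chan_ready) return 0;  /* not ready: yield and retry */
        env->value = *chan;
        *chan_ready = 0;
        env->ip = 1;
        return 0;
    case 1:
        return 1;
    }
    return 1;
}

int run_demo(void) {
    Env p = {0, 0}, c = {0, 0};
    int chan = 0, ready = 0;
    int p_done = 0, c_done = 0;
    /* A trivial round-robin loop standing in for SPOC's run-queue. */
    while (!p_done || !c_done) {
        if (!p_done) p_done = producer(&p, &chan, &ready);
        if (!c_done) c_done = consumer(&c, &chan, &ready);
    }
    return c.value;  /* the communicated item: 97 */
}
```

Note that, as in SPOC, the resume point is plain data (the ip field), so a suspended process can be stored on a queue and re-entered later through an ordinary function call.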
2.4. Labels as Values and Computed ‘goto’s

Another approach which enables transfer of control between arbitrary scheduling points is to use the GCC ‘labels as values’ extension. The extension is primarily provided by GCC in order to facilitate the writing of threaded¹ interpreters. Threaded interpreters do away with the traditional fetch-execute loop, and instead ‘compile’ bytecode into an array of addresses corresponding to the start of each instruction. At the end of the C code implementing a particular instruction, a jump to the next instruction is performed by using a ‘computed goto’: goto **(pc++); or similar. This can improve the performance of an interpreter over a pure switch-based one, as the fetch-execute loop is eliminated. Execution speeds can be further improved by weaving the instruction fetch into the code for each instruction, to help the C optimiser fill load and branch delay slots [31,32]. The remainder of this paper will discuss a method for generating C code from occam-π sources using ‘labels as values’ and ‘computed goto’.

3. Using the Transterpreter as a Runtime for Native C

This section describes the mechanisms used to transform an occam-π program into a C program that can be compiled into native code by a compiler that supports ‘labels as values’ and ‘computed goto’. Both the C compiler found in the GNU Compiler Collection (gcc) and the Intel compiler (icc) provide these extensions. Unmodified scheduler functions from the Transterpreter are used to provide the scheduling functionality required for the generated C programs. Because the scheduler functionality is provided as a set of C functions, the C stack must be left intact by the occam-π program. It is therefore not possible to jump across function boundaries using goto, and the entire occam-π program must be compiled as a single C function. The translations described in this paper are used by the 42 compiler to turn occam-π-like programs into C.
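The threaded dispatch described in section 2.4 can be sketched as a miniature interpreter. The instruction set below is invented purely for illustration; the essential points are the array of label addresses and the goto **(pc++); at the end of each instruction:

```c
/* A minimal threaded interpreter using the GCC 'labels as values'
   extension: bytecode is 'compiled' into an array of label addresses,
   and each instruction ends with a computed goto to the next, so
   there is no central fetch-execute loop. */
int interpret(void) {
    static void *program[5];
    void **pc;
    int acc = 0;

    /* 'Compile': each slot holds the address of an instruction. */
    program[0] = &&op_inc;
    program[1] = &&op_inc;
    program[2] = &&op_double;
    program[3] = &&op_inc;
    program[4] = &&op_halt;

    pc = program;
    goto **(pc++);           /* dispatch the first instruction */

op_inc:
    acc += 1;
    goto **(pc++);           /* jump straight to the next instruction */
op_double:
    acc *= 2;
    goto **(pc++);
op_halt:
    return acc;              /* (1 + 1) * 2 + 1 = 5 */
}
```

This is the same dispatch mechanism the generated occam-π code will use, except that the 'instructions' will be the atomic stretches of occam-π code between scheduling points.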
It would also be possible to translate the Extended Transputer assembly Code (ETC) [33] generated by the occ21 compiler in this manner, although such a translator has not been implemented at the time of writing. A translation from ETC is likely to produce poorer quality code, as a great deal of useful semantic information is lost in the translation from occam-π to Extended Transputer assembly Code.

¹ Threading in this context does not relate to multiple execution contexts, i.e. threads, but to a dispatch mechanism used in interpreters.

3.1. An Overview of the Execution Environment

The scheduler is implemented as a number of C functions, called using the C stack, which must be available for this purpose. A separate stack (the workspace) is allocated for the occam-π program, indexed using the workspace pointer (wptr). In addition to the workspace pointer, a number of pointers are needed to hold the scheduling queues. The fptr and bptr, front- and back-pointer respectively, are used to keep the linked list of runnable processes. Runnable processes are inserted at the back-pointer, and the next scheduled process is taken off the front-pointer. A timer queue is kept in the tptr, with the next timeout stored in tnext. The timer queue contains a list of processes ordered by timeout, and care must be taken to insert processes into the correct place in the queue. Listing 2 shows the state described above as defined in the generated C file. Depending on the architecture, it is possible to place one or more of these pointers in machine registers; shown here is the code generated for an Intel IA-32 compatible processor, where the workspace pointer wptr has been placed in the base pointer register ebp. On architectures with larger register sets, it is possible to keep more of the scheduling pointers in registers, thereby improving scheduling performance slightly.

register int *wptr asm("ebp");
int *fptr, *bptr, *tptr, tnext;

Listing 2. Required state
The workspace grows upwards, and the workspace pointer is therefore initialised to point to the bottom of the allocated workspace. The scheduling pointers are initialised to be null.

3.2. Simple ‘goto’s

ANSI C’s goto statement is used to transfer execution between well-known points in the program. This is used to implement PROC invocations and complex loops which we cannot easily translate into while or for loops.

3.3. Manipulating State

The program’s state, other than that shown in listing 2, is stored in the workspace, and indexed by the wptr variable. Storing a value into the workspace becomes a simple assignment using the workspace pointer as an index: wptr[1] = 2;. Complex expressions are translated directly into C wherever practical, in order to let the C compiler perform any optimisations possible: wptr[2] = wptr[1] + (wptr[3] / 2);. Workspace is allocated and deallocated by modifying the address held in wptr. For example, allocating eight slots, each the size of the machine word, is done by subtracting from the wptr: wptr = wptr - 8;, and deallocation by adding: wptr = wptr + 8;.

3.4. Manipulating Processes: Labels as Values

The “labels as values” extension provides the ability to take the address of a label by using the && operator. The application of this operator produces the address of a label as a value, which can be used in arbitrary expressions and assignments. It is therefore possible to take the address of a label which may be needed in the future: &&L10.
Once we have obtained such a value, we can use it in, for example, the add_to_queue(int *wptr, void *iptr) function, which takes the process descriptor (actually the workspace pointer) of a process, as well as its initial instruction pointer, and stores it on the scheduling queue. This function is used to make a process runnable, by adding it onto the scheduling queue: add_to_queue((int)(wptr - 4), (int) &&L13);.
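As a small illustration of how a label address, once taken, behaves like any other workspace value, the sketch below stores a resume point in a workspace slot and later dispatches through it. The layout and names are hypothetical, and intptr_t stands in for the paper's int slots so the sketch is also correct on 64-bit hosts:

```c
#include <stdint.h>

/* Combining sections 3.3 and 3.4: plain data and a label address
   live side by side in the workspace, and a computed goto resumes
   execution from the stored label. */
int resume_demo(void) {
    intptr_t workspace[8];
    intptr_t *wptr = workspace;
    int trace = 0;

    wptr[1] = 2;                   /* ordinary state in the workspace */
    wptr[2] = (intptr_t) &&L10;    /* a resume point, stored as a value */

    trace += 1;
    goto *(void *) wptr[2];        /* dispatch to the stored label */

    trace += 100;                  /* skipped by the computed goto */

L10:
    return trace + (int) wptr[1];  /* 1 + 2 = 3 */
}
```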
3.5. Calls and Returns: Computed ‘goto’s

It is often necessary to use the address of a location that appears after the current statement. We can get the address of such locations by dropping labels in the code, which we can then reference using the ‘labels as values’ extension. Listing 3 shows the code for a call, used to invoke a PROC. This code needs to store the return address, which is done by placing the cur_ip_42 label after the code implementing the call. The call code can then reference this label in order to put it in the return address slot in the new PROC’s stack frame: wptr[-4] = (int) &&cur_ip_42;. The remaining two assignments into the new stack frame are used to pass values to the new PROC. Finally the stack frame is allocated by moving the wptr up by four slots, and the goto is executed.

wptr[-4] = (int) &&cur_ip_42;
wptr[-3] = wptr[2];
wptr[-2] = wptr[3];
wptr = wptr - 4;
goto L8;
cur_ip_42:

Listing 3. Calling a PROC
wptr = wptr + 4;
goto *wptr[-4];

Listing 4. Returning from a PROC
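Listings 3 and 4 can be combined into a complete, runnable sketch. The frame layout follows the listings; the result-passing slot and the intptr_t slot type (rather than the paper's int, which is only pointer-sized on IA-32) are assumptions made for the sake of a self-contained example:

```c
#include <stdint.h>

/* The caller stores a return label address in the callee's frame slot
   wptr[-4], jumps with an ordinary goto, and the callee returns with a
   computed goto through that slot. */
int call_demo(void) {
    intptr_t workspace[64];
    intptr_t *wptr = workspace + 32;
    int result;

    /* --- call site (listing 3 pattern) --- */
    wptr[-4] = (intptr_t) &&cur_ip_42;  /* return address              */
    wptr[-3] = 20;                      /* first argument              */
    wptr[-2] = 22;                      /* second argument             */
    wptr = wptr - 4;                    /* allocate the callee's frame */
    goto proc_add;

proc_add:                               /* PROC add: frame is wptr[0..3] */
    wptr[3] = wptr[1] + wptr[2];        /* result into a spare frame slot */
    /* --- return (listing 4 pattern) --- */
    wptr = wptr + 4;                    /* deallocate the frame */
    goto *(void *) wptr[-4];            /* computed goto to the return address */

cur_ip_42:
    result = (int) wptr[-1];            /* read back the slot the callee wrote */
    return result;                      /* 20 + 22 = 42 */
}
```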
The return address now resides in the stack frame allocated by the previous call. Eventually the call will return, at which point the address will be loaded out of the stack and used as an argument to a goto (listing 4). A special syntax is used to indicate that the argument of the goto is not a label in itself, but rather an expression containing the address of a label, a ‘computed goto’: goto *expression;. To return from a call the stack frame is deallocated: wptr = wptr + 4;, and the return address is read out of the stack and used in the goto.

3.6. Scheduling

‘Labels as values’ and the ‘computed goto’ are used to deal with scheduling points. An occam-π program has a fixed number of scheduling points, most often in the form of channel communications. When a scheduling point is reached, the location of the currently executing context must be stored so we can return to it later. A previously suspended context must then be loaded, so that execution can be resumed. The prototype for the in function, which performs this task, is shown in listing 5. This is an explicit scheduling point, where execution will have to transfer elsewhere in the program.

char *in(int nbytes, int *chan_ptr, char *write_start, char *iptr);

Listing 5. Prototype for the ‘in’put part of a communication
The in function takes as arguments the number of bytes being communicated, a pointer to the channel word on which the communication is occurring, a pointer to the location where the data should be put, and finally a pointer to the location where the current process would like execution resumed once the communication has successfully completed. The function returns the address where execution should resume once the in has completed
successfully. The address returned may be that of the process which engaged the other side of the communication. If this process is not yet ready, the address of an arbitrary process from the scheduling queue will be returned instead. A typical communication is shown in listing 6.

goto *(in(4, (int *) wptr[1], (char *)(wptr + 3), &&id_in_aa));
id_in_aa:

Listing 6. Performing a communication
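The rendezvous behind listing 6 can be sketched in miniature. The real in is a Transterpreter scheduler function; in this much-simplified model the channel is a single word, only one side ever needs to wait, and the 'park and reschedule' step is reduced to storing a resume label and jumping to the other process:

```c
#include <stdint.h>

/* Process A arrives at the channel first, parks its resume address,
   and control transfers to process B, which completes the
   communication and resumes A with a computed goto. */
int channel_demo(void) {
    intptr_t chan = 0;          /* the channel word: 0 = empty          */
    intptr_t a_resume = 0;      /* resume address of the parked process */
    int received = 0;

    /* -- process A: output 97 -- */
    if (chan == 0) {            /* B has not arrived yet       */
        chan = 97;              /* leave the data (simplified) */
        a_resume = (intptr_t) &&a_resumed;
        goto process_b;         /* schedule another process    */
    }
a_resumed:
    /* A is rescheduled here once B has completed the communication. */
    goto finished;

process_b:
    /* -- process B: input -- */
    received = (int) chan;      /* the other side was waiting,      */
    chan = 0;                   /* so complete the copy...          */
    goto *(void *) a_resume;    /* ...and resume the parked process */

finished:
    return received;
}
```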
When execution reaches the statement in listing 6, the in function will be called. The arguments given here are: 4, the number of bytes to transfer; wptr[1], the dereference of a workspace location, containing the address of the channel being communicated on; wptr + 3, the location where the value which will be received is going to be stored; and lastly &&id_in_aa, the address of the label located immediately following the in function. This address is stored by the in function, so that when the channel communication has completed and this process is rescheduled, execution can resume from that point, i.e. the label id_in_aa. Once in progress, the in function will, if the other side of the communication is already waiting, perform the copy of data into wptr + 3, and return, using the other side’s resume address as the return value. If the other side is not yet ready, the in function will park the current process in the channel, pick a new process off the run-queue and return its address. It is only at some later point in the execution of the program, when another scheduling function returns the address of the label id_in_aa, that execution will resume at this point.

3.7. Limitations of This Approach

The approach for native code generation presented in this paper has some limitations. There are a number of ways in which these may be addressed, including the use of the method employed in [27] to perform gotos across function boundaries. This approach is not entirely portable, however, and in this paper we also discuss the possibility of interfacing directly with existing compiler frameworks (see section 5.1). While the scope of the approach described in this paper is limited to small monolithic programs, as described below, this does not leave it without merit.
We are currently using it to give us confidence that a compiler framework such as GCC can provide excellent performance for CSP-based languages without an unnecessarily large development overhead. It is also possible to use the method described in this paper to target fairly small devices, such as sensor networks and the individual vector processing units on the Cell Broadband Engine. These platforms can be used for prototyping work, on which we can evaluate the benefit of native code compilation on small, constrained devices, before taking our approach further.

• Code size limitations: occam-π programs are compiled into one large monolithic function in order to enable the use of ‘labels as values’ and ‘computed goto’s. This can present a problem for GCC, as it compiles and optimises a function at a time, and does not perform well when single functions become very large. Both compilation time and memory usage can be substantial when compiling large monolithic functions.

• External linkage: This is related to the compilation of occam-π programs as one monolithic function. It is not possible to cross a function boundary using a goto, and it is therefore not possible to combine several pre-compiled object files to form a complete native occam-π program. Separate compilation can reduce compilation times, and allow for dynamic (runtime) linkage. Dynamic linkage is not a great concern on the targets we are currently dealing with.

• Floating point arithmetic: C does not provide full control over the floating point unit. For example, it is not possible for a program to set rounding modes, or receive
floating point traps. This is potentially problematic for the generation of floating point code from occam-π programs, as occam-π does provide fine-grained control over floating point operations.

4. Performance

This section outlines some preliminary performance figures for native code generated using the 42 compiler. We present two benchmarks: commstime and matrix multiply. Commstime is used to measure the context switch time of a CSP-based runtime, and highlights the time it takes to switch from one running process to another. We have also included a benchmark which performs a naive matrix multiplication, in order to assess the effectiveness of GCC in optimising generated looping code. The matrix multiply benchmark contains three nested loops which perform a matrix multiplication on two 500 by 500 arrays of integers. The initialisation of the two arrays is included in the measured time. Further benchmarks will be produced as the 42 compiler matures. The benchmarks have been run against KRoC 1.4.1-pre4, SPOC 1.3, and the Transterpreter 0.7. The benchmarks were executed on an unloaded Intel Pentium 4 CPU running at 3.60GHz, using a Linux 2.6.15 (preemptible) kernel. The first two entries (‘Generated C’) in tables 1 and 2 refer to the programs compiled to native C using the approach presented in this paper.

Commstime             nanoseconds
Generated C, -O0      110
Generated C, -O2²     8
KRoC                  14
KRoC, -io -is -it     12
SPOC, -O0             65
SPOC, -O2             31
Transterpreter        147

Table 1. Commstime, context switch times

Matrix Multiply       milliseconds
Generated C, -O0      1707
Generated C, -O2²     1052
KRoC                  1939
KRoC, -io -is -it     1930
SPOC, -O0             1486
SPOC, -O2             683
Transterpreter        35870
ANSI C, -O0           1444
ANSI C, -O2           672

Table 2. Matrix multiplication times
² Additionally, the -fomit-frame-pointer and -finline-functions flags have been used.

While the benchmarks presented are preliminary and should not be taken as conclusive, they do offer an indication that our approach is worth further investigation. The commstime benchmark currently indicates that the approach presented has a context switch time which is slightly faster than KRoC’s. The Transterpreter scheduler, which is also used in the generated C code, does not however deal with priority, which may account for some of the difference in speed. The matrix multiply benchmark illustrates the value of relying on optimisations already present in the GCC compiler. This leaves the C generator with the significantly simpler task of producing optimisable code, rather than optimised code.

5. Future Directions

5.1. Using a Compiler Framework

In this paper, we have concentrated on producing fast and portable native code for the occam-π language, and languages like it. We have placed particular emphasis on ensuring portability,
to enable code generation for a large number of architectures with minimum effort. In our effort to compile to as many architectures as possible, many existing compiler frameworks become unavailable to us, as they only target four or five of the ‘major’ architectures (i.e. platforms such as the IA-32, PowerPC and SPARC). None of the compiler frameworks which the authors have investigated, including C-- [34], LLVM [35], Zephyr [36] and MLRISC [37], provide the desired level of portability, as compared with the GNU Compiler Collection. They do, however, present friendlier compiler targets than GCC.

5.2. Using GCC as a Compiler Framework

The GNU Compiler Collection [38] (GCC) contains a number of language front-ends and target back-ends. GCC currently supports the compilation of C, C++, Java, Ada, Objective-C and Fortran 95, and generates code for a large number of architectures, listed on the GCC back-end status page [39]. Further, a number of front- and back-ends are in development, many with the aim of being integrated into GCC proper once a suitable point of maturity has been reached. These include front-ends for BCPL [40], Modula-2 [41] and the logic programming language Mercury [42]. Additionally, the Treelang front-end serves as an example of how to write a front-end for GCC. Back-ends for the MSP430 [43] and Cell Broadband Engine [44] are under active development. GCC started out as a C compiler, targeting only the major architectures used by the GNU project, but has in recent years been progressing towards becoming a more generic compiler construction framework. Over the years, in no small part due to its open nature, GCC has evolved into a multi-language compiler which targets over 30 platforms.
While it has been notoriously difficult to write a new language front-end for the GCC compiler in the past, better documentation [45,46], example front-ends, and an internal representation less biased towards C/C++-like procedural languages have made this task easier. Writing a language front-end for GCC is still no trivial undertaking, as a language front-end must interface directly with GCC and produce in-memory GENERIC data structures [47], which can then be lowered to other internal data structures automatically, in order to perform optimisations and code generation.

5.2.1. Interfacing occam-π with GCC

This paper has described a method of interfacing with GCC through the use of C as an intermediary language. This comes at the cost of expressiveness, due to the constraints of the C language. The harder, but more flexible, way to use GCC is to interface with it directly, and make the front-end generate GCC’s internal data structures. This, however, requires an intimate understanding of the internals of GCC, and considerably more effort than generating C code. Furthermore, it is yet to be determined if GCC’s internal program representation, GENERIC, is able to represent occam-π programs. Since GENERIC is designed as a language-independent representation, this is not likely to be as big an issue as it would have been in the past. We have chosen to evaluate the potential performance of using GCC as a back-end for occam-π by using C as an intermediary language, rather than starting with the trickier direct integration into GCC. As we have shown that excellent performance and portability can be obtained from GCC, we are confident in our ability to provide a front-end for GCC in the future.

6. Conclusions

This paper has demonstrated the feasibility of automatic generation of C code from occam-π programs, which, when compiled to native code, exhibits very good performance, both in
terms of context switch times and sequential code. We have demonstrated that existing compilers and code generators can be used to generate native occam-π programs with performance which compares very favourably with existing solutions. The potential problems concerning code size limitations and the lack of separate compilation have been identified. In the immediate future, however, we are most interested in using this method to target relatively small devices such as the MSP430 and the vector processing units in the Cell Broadband Engine, where these problems are not likely to present a great concern for our prototyping efforts. Eventually, we would like to overcome these problems by taking a more direct route to native code than through C, while not sacrificing portability. We will be investigating the use of the currently experimental 42 compiler for the Transterpreter, and making it generate native code using an existing code generation framework. While options such as C-- and LLVM seem like attractive targets, they do not currently target nearly as many architectures as the GNU Compiler Collection.

References

[1] Christian L. Jacobsen and Matthew C. Jadud. The Transterpreter: A Transputer Interpreter. In Dr. Ian R. East, Prof. David Duce, Dr. Mark Green, Jeremy M. R. Martin, and Prof. Peter H. Welch, editors, Communicating Process Architectures 2004, volume 62 of Concurrent Systems Engineering Series, pages 99–106. IOS Press, Amsterdam, September 2004.
[2] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, 1985.
[3] R. Milner. Communicating and Mobile Systems: the Pi-Calculus. Cambridge University Press, 1999. ISBN-10: 0521658691, ISBN-13: 9780521658690.
[4] P.H. Welch and F.R.M. Barnes. Communicating mobile processes: introducing occam-pi. In 25 Years of CSP, volume 3525 of Lecture Notes in Computer Science, pages 175–210. Springer Verlag, April 2005.
[5] Christian L. Jacobsen and Matthew C. Jadud. Towards concrete concurrency: occam-pi on the LEGO Mindstorms. In SIGCSE ’05: Proceedings of the 36th SIGCSE technical symposium on Computer science education, pages 431–435, New York, NY, USA, 2005. ACM Press.
[6] Moteiv. Tmote Sky. http://www.moteiv.com/products-tmotesky.php, 2006.
[7] Matthew C. Jadud, Christian L. Jacobsen, and Damian J. Dimmich. Concurrency on and off the sensor network node. SEUC 2006 workshop, 2006.
[8] J. A. Kahle, M. N. Day, H. P. Hofstee, C. R. Johns, T. R. Maeurer, and D. Shippy. Introduction to the Cell multiprocessor. IBM Journal of Research & Development, 49(4/5):589–604, July/September 2005.
[9] Damian J. Dimmich, Christian L. Jacobsen, and Matthew C. Jadud. A Cell Transterpreter. In Peter Welch, Jon Kerridge, and Fred Barnes, editors, Communicating Process Architectures 2006, Concurrent Systems Engineering Series. IOS Press, Amsterdam, September 2006.
[10] P.H. Welch and D.C. Wood. The Kent Retargetable occam Compiler. In Brian O’Neill, editor, Parallel Processing Developments, Proceedings of WoTUG 19, volume 47 of Concurrent Systems Engineering, pages 143–166. World occam and Transputer User Group, IOS Press, Netherlands, March 1996. ISBN: 90-5199-261-0.
[11] P.H. Welch and F.R.M. Barnes. The KRoC Home Page, 2006. Available at: http://www.cs.ukc.ac.uk/projects/ofa/kroc/.
[12] Michael D. Poole. Occam for all – Two Approaches to Retargetting the INMOS Compiler. In Brian C. O’Neill, editor, Proceedings of WoTUG-19: Parallel Processing Developments, pages 167–178, February 1996.
[13] Frederick R. M. Barnes. tranx86 – An Optimising ETC to IA32 Translator. In Alan G. Chalmers, Majid Mirmehdi, and Henk Muller, editors, Communicating Process Architectures 2001, pages 265–282, September 2001.
[14] M. Debbage. Southampton’s Portable occam Compiler (SPOC), 1994.
[15] SPOC source code. http://www.hpcc.ecs.soton.ac.uk/software/spoc/.
[16] INMOS Limited. occam2 Reference Manual. Prentice Hall, 1984. ISBN: 0-13-629312-3.
[17] INMOS Limited. occam Programming Manual. Prentice Hall, 1984.
[18] Andrew S. Tanenbaum, Hans van Staveren, E. G. Keizer, and Johan W. Stevenson. A practical tool kit for making portable compilers. Commun. ACM, 26(9):654–660, 1983.
[19] Kees Bot and Edwin Scheffer. An occam compiler. Technical report, Vrije Universiteit Amsterdam, The Netherlands, February 1987.
280
C.L. Jacobsen et al. / Native Code Generation Using the Transterpreter
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Compositions of Concurrent Processes

Mark BURGIN a and Marc L. SMITH b,1

a Department of Computer Science, University of California, Los Angeles, Los Angeles, California 90095, USA
b Department of Computer Science, Vassar College, Poughkeepsie, New York 12604, USA
Abstract. Using the extended model for view-centric reasoning, EVCR, we focus on the many possibilities for concurrent processes to be composed. EVCR is an extension of VCR, both models of true concurrency; VCR is an extension of CSP, which is based on an interleaved semantics for modeling concurrency. VCR, like CSP, utilizes traces of instantaneous events, though VCR permits recording parallel events to preserve the perception of simultaneity by the observer(s). But observed simultaneity is a contentious issue, especially for events that are supposed to be instantaneous. EVCR addresses this issue in two ways. First, events are no longer instantaneous; they occur for some duration of time. Second, parallel events need not be an all-or-nothing proposition; it is possible for events to partially overlap in time. Thus, EVCR provides a more realistic and appropriate level of abstraction for reasoning about concurrent processes. With EVCR, we begin to move from observation to the specification of concurrency, and the compositions of concurrent processes. As one example of specification, we introduce a description of I/O-PAR composition that leads to simplified reasoning about composite I/O-PAR processes. Keywords. event, trace, composition, process, true concurrency, I/O-PAR.
Introduction

The motivation for view-centric reasoning (VCR) [13] stemmed from a desire to preserve more information about the history of a computation than is preserved in a traditional CSP trace. With the CSP model of concurrency, Hoare introduced the powerful notion that one could reason about a computation by reasoning about its trace of observable events [4]. CSP is an interleaved model of concurrency, which means that while some events may appear to occur simultaneously during a computation, an observer records such occurrences in an arbitrary, interleaved fashion. Such behavior by a computation's observer results in a decrease of entropy (i.e., imposing an order of events that did not actually exist during the computation) within the CSP trace. VCR begins to address this issue with entropy-preserving metaphors and abstractions, such as lazy observation; multiple, possibly imperfect observers; and parallel event traces. In particular, VCR traces were defined over multisets of observable events, rather than atomic events. The laziness stems from sparing the observer the stressful decision of what order to record simultaneous events in. Furthermore, VCR distinguishes two types of trace: a computation's history, and its corresponding views. Since it is possible for a distributed computation to be observed by more than one observer, consequences of relativity theory provide for the possibility of different views of the same computation. Thus, the parallel
1 Corresponding Author: [email protected]
M. Burgin and M.L. Smith / Compositions of Concurrent Processes
event traces of VCR extend CSP in two important ways. First, VCR provides the entropy-preserving abstraction of parallel events, and second, it provides a model for associating a computation's history with its multiple, possibly imperfect, but realistic views. VCR's contribution to the theory of CSP is not as general as it could be; a model of true concurrency should support abstractions for events whose occurrences partially overlap in time. Thus, the possibility of observed event simultaneity is not merely a Boolean proposition, but rather a continuum: events A and B may overlap entirely, partially, or not at all. Extended VCR (EVCR), which is introduced by the authors in [3], is the next step in the evolution of CSP from a model that reduces concurrency to an interleaving of sequential events, to a model of true concurrency that provides abstractions that represent degrees of interleaving. EVCR provides abstractions that reflect the possibilities of behavior that arise when composing today's loosely-coupled, distributed systems, and should therefore prove beneficial for modeling and reasoning about such computation. In this paper, we extend the initial exposition of EVCR in [3], paying special attention to compositions of concurrent processes. Compositions of systems and processes have become an important tool for the development of modern hardware and software systems with complex sets of requirements. Unfortunately, as Neumann [11] writes, there is a huge gap between theory and common practice: system compositions at present are typically ad hoc, based on the intersection of potentially incompatible component properties, and dependent on untrustworthy components that were not designed for interoperability – often resulting in unexpected results and risks. The results of this paper give additional evidence for this statement. The analysis of real systems and processes has made it possible to find new types even of such a well-known operation as sequential composition.
This was possible because the authors considered not only the data-flow relations between systems and processes used in the conventional sequential composition, but also temporal and causal relations.

In the first section, we develop a rigorous setting for studying and reasoning about concurrent processes. We give strict definitions of different kinds of events and processes. In the second section, we introduce and study basic temporal relations that exist between events in concurrent processes. The topic of the third section is composition of concurrent processes. Section 4 presents a case study of I/O-PAR composition, and the benefits of EVCR abstractions for reasoning about the specification and properties of composite I/O-PAR processes.

1. Systems, Processes and Events

The basic concept of a theory of concurrent processes is the event. We consider two types of events: abstract or process events and embodied or system events. Usually, a system belongs to some environment, while a process interacts (communicates) with other processes, which form the environment of the first process. Taking into account this natural structure, we come to three categories of events.

Definition 1. An event in a system R (of a process P) is a change of the state, or phase, of the system R (in the process P).

For instance, an event in such a system as a finite automaton or finite state machine is a state transition of this automaton or machine. Sending a message is an event in a network. Printing a symbol or a text, producing a symbol on the screen of the display, and calculating 2 + 2 are events in a computer. The latter event can also be an event in a calculator.
It is natural to consider events as multisets (cf., for example, [13]). Thus, a process can contain several copies of the same event. At the same time, it is possible that two or more processes contain different copies of an event in some cases, while they have a common copy (or several common copies) of the same event. For instance, taking such an event as sending a message, we see that different copies of this event belong, as a rule, to many different processes. At the same time, each copy of such an event as information (data) exchange between two processes necessarily belongs to two processes.

Definition 2. An event by a system R (a process P) is a change in the environment of the system R (the process P) caused by R.

For instance, when a system R sends information to another system, it is an event by this system R.

Definition 3. An event for a system R (a process P) is a change in relations between the system R (the process P) and its environment.

For instance, when a system R was connected to another system and then this connection is broken, it is an event for this system R.

Definition 4. When the change is detectable, i.e., there are means to discern this change, the corresponding event is called observable.

Not all events are observable if there are, for example, indistinguishable states in a system, or continuous changes are observed in a system where it is impossible to discern the next state of the system. The question is: why do we need unobservable events? The reason for this is that in some cases, a researcher can build a better model of a process by taking into account not only observable events. For instance, in physics, the majority of physical quantities take values in the set of all real numbers. Any interval of real numbers contains an uncountable quantity of numbers. However, it is known that measurement can give only a finite number of values.
Thus, the majority of values of continuous physical quantities, such as speed, acceleration, temperature, etc., are unobservable, but models that use real numbers are more efficient than finite models.

Definition 5. A process is a system of related events.

This is a very general definition. For example, one event may be considered as a process. Several events in some system also may be considered as a process. However, it is reasonable to have, at first, such a general definition and then to separate different kinds of processes relevant to concrete situations. It is also necessary to remark that this definition of a process depends on our comprehension and/or description. There are two types of processes: abstract processes, which consist of abstract events, and embodied processes, which consist of events in some system. There are also two types of systems: structural or semiotic, such as a finite automaton or Turing machine, and physical, such as computers, servers, routers, and the World Wide Web. As a result, embodied processes are subdivided into two classes: structurally and physically embodied.

Definition 6. A process P is called finite if it consists of a finite number of events. Otherwise, P is called infinite.

For a long time, computer scientists studied and accepted only finite processes. The finiteness condition was even included in definitions of the main mathematical models of
algorithms, such as Turing machines. However, infinite processes are becoming more and more important in theoretical studies and practical applications (cf., for example, [2, 12, 14]).

Definition 7. Any subset of a process P in which all relations between events are induced by relations in P is called a subprocess of P.

Proposition 1. Any subprocess of a subprocess of P is a subprocess of P.

2. Temporal Relations between Events

There are different relations between events in a process, as well as between events in a system. The pivotal one is the ordering relation, which defines time in the process or is defined by time in the system to which these events belong. Analyzing temporal relations in real computational and other processes, we can distinguish several types of event pairs:

• A sequential pair of events consists of two events where one ends before the next starts.
• A simultaneous pair of events consists of two events where both events start and end simultaneously.
• A coexisting pair of events consists of two events where one starts before the next ends.
• A separable pair of events consists of two events that are not coexisting.
• A separated pair of events consists of two events for which coexistence is excluded.
• An event r is included in an event q if the event q starts before or when the event r starts and the event q ends after or when the event r ends.
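These pair types can be made concrete by modelling each event as an interval of time, anticipating the begin/end times bt(a) and et(a) introduced later in the paper. The sketch below is our own illustration, not the authors' notation: the `Event` class and predicate names are assumptions, and we read "coexisting" as strict overlap, so that a pair that merely touches end-to-start stays separable, as Axiom (CP 2) later requires.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    start: float  # bt(a): the time the event begins
    end: float    # et(a): the time the event ends

def sequential(a: Event, b: Event) -> bool:
    """One event ends before (or exactly when) the next starts."""
    return a.end <= b.start

def simultaneous(a: Event, b: Event) -> bool:
    """Both events start and end together."""
    return a.start == b.start and a.end == b.end

def coexisting(a: Event, b: Event) -> bool:
    """One event starts before the other ends: the intervals strictly overlap."""
    return a.start < b.end and b.start < a.end

def separable(a: Event, b: Event) -> bool:
    """A separable pair is a pair that is not coexisting."""
    return not coexisting(a, b)

def included(r: Event, q: Event) -> bool:
    """q starts before or when r starts, and q ends after or when r ends."""
    return q.start <= r.start and q.end >= r.end
```

With this reading, `Event(0, 2)` and `Event(3, 5)` form a sequential (hence separable) pair, while `Event(0, 2)` and `Event(1, 4)` coexist; events of positive duration are assumed throughout.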
Types of events are formally determined by the corresponding binary relations on sets of events:

• The coexistence relation CER. This relation allows several interpretations, reflecting different modalities. It can formalize situations in which events must go so that one starts before another is finished (prescriptive coexistence). It can also formalize situations in which one event really starts before another is finished (actual coexistence). It can as well formalize situations in which one event may start before another is finished (possible coexistence).
• The separability relation SPR. This relation shows when events are (or may be) not coexisting.
• The ordering relation ODR. This relation allows several interpretations. It can formalize the order in which events must happen (prescriptive order), e.g., at first, we need to compute the value of a function, and only then to print the result. It can also formalize the order in which events really happened (actual order). It can as well formalize the order in which events may happen (possible order).
• The simultaneity relation STR. This relation allows several interpretations. It can formalize situations in which events must go simultaneously (prescriptive simultaneity). It can also formalize situations in which events really go simultaneously (actual simultaneity). It can as well formalize situations in which events may go simultaneously (possible simultaneity).
• The inclusion relation ICR. This relation allows several interpretations. It can formalize situations in which events are constrained so that the included event ends
before or when the other is finished and starts after or when the other starts (prescriptive inclusiveness). It can also formalize situations in which the included event really ends before or when the other is finished and really starts after or when the other starts (actual inclusiveness). It can as well formalize situations in which it is possible that the included event ends before or when the other is finished and starts after or when the other starts (possible inclusiveness).

EVCR actually models arbitrary processes. Some illustrations of the above relations may be found in music for the piano, where the musical notes are events. A piece of music is composed from individual notes, each produced by a corresponding key on the piano, according to a musical prescription, represented in the form of sheet music. The sheet music prescribes when to play separated notes in succession (e.g., a run), and when to play notes in different possible coexisting fashions. Chords, arpeggios, melodies, harmonies, and syncopated rhythms are but a few examples of what can be prescribed by different combinations of the above relations. Upon performing such a piece of music, members of the audience subsequently make a determination of how well the actual composition of musical notes matches what they believe was prescribed. In some cases, the result is a round of applause.

Returning our attention to grid automata, natural axioms on the above relations are derived from an analysis of real computations and other processes:

Axiom (CP 1). CER is a complement of SPR. An informal meaning of this is given by the following statement:

Corollary 1. Any two events are either coexisting or separable.

Axiom (CP 2). ODR is a subset of SPR. An informal meaning of this is given by the following statement:

Corollary 2. Any two ordered events are separated.

Axiom (CP 3). STR and ICR are subsets of CER. An informal meaning of this is given by the following statement:

Corollary 3.
Any two simultaneous events, as well as any two events of which one is included into another, are coexisting.

Definition 8. A binary relation Q is called a tolerance on a set X (or of a set X) if it is reflexive, i.e., xQx for all x from X, and symmetric, i.e., xQy implies yQx for all x, y from X.

Axiom (CP 4). CER is a tolerance. An informal meaning of this is given by the following statement:

Corollary 4. Any event is coexisting with itself.

Axiom (CP 5). STR is an equivalence. This implies the following property:
Corollary 5. Any event is simultaneous with itself.

Definition 9. A binary relation Q is called a preorder (or quasiorder) on a set X if it is reflexive, i.e., xQx for all x from X, and transitive, i.e., xQy and yQz imply xQz for all x, y, z from X. A preorder that also is antisymmetric is a partial order.

Axiom (CP 6). ODR is a partial quasiorder.

Axiom (CP 7). SPR is reflexive and symmetric.

Axiom (CP 8). ICR is a partial order. An informal meaning of this is given by the following statement:

Corollary 6. Any event is included into itself.

These relations define connections between events:

Definition 10. If Q is a binary relation, then the transitive closure Q* of Q is the smallest transitive binary relation that contains Q.

As the intersection of transitive binary relations is a transitive relation, the transitive closure of a relation is unique.

Definition 11. Two events a and b are existentially connected if aCER*b.

Informally, the existential connectedness of the events a and b means that there is a sequence of events a0, a1, a2, a3, … , an such that a0 = a, an = b, and ai-1 coexists with ai for all i = 1, 2, 3, … , n.

Proposition 2. CER* is an equivalence relation.

Definition 12. Two events a and b are sequentially connected if aODRb.

Sequential connectedness is a base for interleaving, as the following result demonstrates:

Theorem 1. Processes P1, P2, P3, … , Pn are (potentially) interleaving if the relation ODR can be extended to a total order on the set of all events from P1, P2, P3, … , Pn.

Definition 13. Two events a and b are parallelly connected if aSTRb.

Let us consider some important types of complex events:

Definition 14. A parallel event is a group of simultaneous events.

Proposition 3. For any event a from a finite number of finite processes, there is a maximal parallel event that contains a.

Definition 15. A complete parallel event is a group of all events that are simultaneous to one event from this group.
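For finite processes, the transitive closure of Definition 10 and the existential connectedness of Definition 11 can be computed mechanically. The following is our own fixpoint sketch over interval events (the tuple representation and function names are assumptions, not the paper's constructions):

```python
from itertools import product

def coexist(a, b):
    # An event is a (start, end) interval with start < end; two events
    # coexist when their intervals strictly overlap.
    return a[0] < b[1] and b[0] < a[1]

def transitive_closure(pairs):
    """Smallest transitive relation containing `pairs` (Definition 10),
    computed by iterating until no new pair can be added."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (x, y), (y2, z) in product(list(closure), repeat=2):
            if y == y2 and (x, z) not in closure:
                closure.add((x, z))
                changed = True
    return closure

def existentially_connected(a, b, events):
    """Definition 11: a CER* b, i.e. a chain a = a0, a1, ..., an = b in
    which consecutive events pairwise coexist."""
    cer = {(x, y) for x in events for y in events if coexist(x, y)}
    return (a, b) in transitive_closure(cer)
```

For example, with events (0, 2), (1, 4) and (3, 6), the outer two do not coexist, yet they are existentially connected through the middle one, illustrating why CER* is an equivalence (Proposition 2) even when CER itself is only a tolerance.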
Proposition 4. A complete parallel event does not depend on the choice of the event to which all other events from this group are simultaneous.

Proposition 5. A complete parallel event is a maximal parallel event.

Proposition 6. If STR = CER, then ODR induces an order relation on the set of all complete parallel events and it is possible to make this order linear.

Definition 16. A coexisting event is a group of events in which each pair of events is coexisting.

Proposition 7. Any parallel event is a coexisting event.

However, in general, not every coexisting event is a parallel event. These concepts coincide if and only if STR = CER.

Definition 17. A complete coexisting event is a maximal coexisting event.

As the simultaneity relation STR is a subset of the coexistence relation CER, we have the following result:

Proposition 8. A complete coexisting event includes as subsets all parallel events of its elements.

Proposition 9. Two events a and b are simultaneous if and only if a is included in b and b is included in a, or formally, aSTRb ⇔ aICRb & bICRa.

Corollary 7. STR = ICR ∩ ICR⁻¹.

It is possible to extend all introduced relations from binary to n-ary relations on sets of events for any n > 1. To do this, we assume that each event a takes some interval of time t(a). Note that such an interval t(a) can have zero length, i.e., be a point in time. Let us denote by bt(a) the time of the beginning of the event a and by et(a) the time of the end of the event a. Then:
• The ordering relation ODRn: for events a1, a2, … , an, ODRn(a1, a2, … , an) means that et(ai) ≤ bt(ai+1) for all i = 1, 2, …, n − 1.
• The simultaneity relation STRn: for events a1, a2, … , an, STRn(a1, a2, … , an) means that et(ai) = et(aj) and bt(ai) = bt(aj) for all i, j = 1, 2, …, n.
• The inclusion relation ICRn: for events a1, a2, … , an, ICRn(a1, a2, … , an) means that et(ai) ≤ et(ai+1) and bt(ai) ≥ bt(ai+1) for all i = 1, 2, …, n − 1.
• The coexistence relation CERn: for events a1, a2, … , an, CERn(a1, a2, … , an) means that the intersection ∩i=1..n t(ai) is not void.
• The separability relation SPRn. This relation shows when any pair of events a1, a2, … , an is (or may be) not coexisting.
For some of these relations, there is a natural correspondence with their binary analogues:

Proposition 10. ODR2 = ODR, STR2 = STR, ICR2 = ICR, CER2 = CER, and SPR2 = SPR.
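Assuming linear time and the interval model used above, the n-ary relations admit a direct transcription; this is our own sketch and the function names are assumptions:

```python
def ODRn(events):
    """Each event ends before (or when) the next begins: et(ai) <= bt(ai+1)."""
    return all(a[1] <= b[0] for a, b in zip(events, events[1:]))

def STRn(events):
    """All events begin together and end together."""
    return all(e == events[0] for e in events)

def ICRn(events):
    """Each event is included in the next: et(ai) <= et(ai+1), bt(ai) >= bt(ai+1)."""
    return all(a[1] <= b[1] and a[0] >= b[0] for a, b in zip(events, events[1:]))

def CERn(events):
    """The intersection of all the intervals t(ai) is not void."""
    latest_start = max(e[0] for e in events)
    earliest_end = min(e[1] for e in events)
    return latest_start < earliest_end  # strict overlap, as for the binary CER
```

In linear time, `CERn` coincides with the conjunction of the pairwise coexistence checks; the cyclic-time Example 1 below is precisely the situation in which that equivalence fails.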
Proposition 11. ODRn(a1, a2, … , an) is true if and only if aiODRai+1 is true for all i = 1, 2, …, n − 1.

Proposition 12. ICRn(a1, a2, … , an) is true if and only if aiICRai+1 is true for all i = 1, 2, …, n − 1.

Proposition 13. STRn(a1, a2, … , an) is true if and only if aiSTRaj are true for all i, j = 1, 2, …, n.

A similar property for the coexistence relation CERn holds only in linear time. Namely, we have the following result:

Proposition 14. CERn(a1, a2, … , an) is true if and only if aiCERaj are true for all i, j = 1, 2, …, n.

For non-linear time, this is not always correct, as the following example demonstrates:

Example 1. Let us consider cyclic time [1] that goes from 1 to 12, while after 12 comes 1. Taking events a, b, and c for which t(a) = [1, 7], t(b) = [5, 11], and t(c) = [10, 2], we have that aCERb, cCERb and aCERc are true, while CER3(a, b, c) is not true.

3. Compositions of Processes

Definition 18. Composition is an operation that combines two or more processes into one process.

If c is a composition operator and Pi, i ∈ I, are processes, then the result c(Pi, i ∈ I) is a process – also called a composition of the processes Pi. For instance, the result of composition c of processes P and Q is denoted by c(P, Q).

Remark 1. Very often composition of systems induces composition of processes in these systems.

Definition 19. Composition c of processes P and Q is called faithful if both of them become, i.e., are isomorphic to, subprocesses of the result c(P, Q) of this composition.

Definition 20. Composition c of processes P and Q is called exact if both of them become, i.e., are isomorphic to, disjoint subprocesses of the result c(P, Q) of this composition.

Example 2. Let us consider the sequential composition W of three processes A, B, and C where A is the input process of data 7 and 11, B is the process of calculating/computing 7 + 11, and C is the process of printing the result of this calculation.
If we denote by P the sequential composition of processes A and B, and by Q the sequential composition of processes B and C, then W is a faithful but not exact composition of processes P and Q.

These definitions show that any exact composition is faithful. Proposition 1 implies the following result:

Proposition 15. If c is a faithful (exact) composition of processes P and Q and d is a faithful (exact) composition of processes c(P, Q) and R, then d(c(P, Q), R) is a faithful (exact) composition of processes P, Q and R.
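The faithful/exact distinction becomes checkable if a process is modelled simply as a set of events and composition tags each event with its origin. This is our own operational sketch, not the paper's construction; `compose`, the tagging scheme, and the property checks are all assumptions:

```python
def compose(p, q):
    """Combine two processes (event sets) by tagging each event with its
    origin, so the copies of P and Q inside the result stay disjoint."""
    return {("P", e) for e in p} | {("Q", e) for e in q}

def is_exact(result, p, q):
    """Definition 20: P and Q appear as disjoint (isomorphic) subprocesses
    of the result; the tags keep the two copies disjoint even when
    P and Q share events."""
    copy_p = {e for tag, e in result if tag == "P"}
    copy_q = {e for tag, e in result if tag == "Q"}
    return copy_p == p and copy_q == q

def is_strict(result, p, q):
    """Definition 21 (below): every event of the result comes from P or Q."""
    return all(e in p or e in q for _, e in result)
```

Note how tagging mirrors Example 2: even when P and Q share the event B, the composition can hold two disjoint copies of it, making the composition exact.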
Definition 21. Composition c of processes P and Q is called strict if any event from c(P, Q) belongs either to P or Q, or to both of them.

Proposition 16. If c is a strict composition of processes P and Q and d is a strict composition of processes c(P, Q) and R, then d(c(P, Q), R) is a strict composition of processes P, Q and R.

The most popular composition is the sequential composition of systems, functions, algorithms, and processes. The first formalized form of the sequential composition was elaborated for functions in mathematics. A similar construction for processes results in the following concept:

Definition 22. The sequential composition PsQ of processes P and Q combines them so that the process PsQ contains all events from P and Q and relations between them, all output data of the first process are taken as input data of the second one, and for any events a from P and b from Q, we have aODRb, i.e., all events from P precede all events from Q.

The sequential composition is often used in computer science to prove different properties of automata, algorithms, and formal languages. For instance, the sequential composition of finite automata is used in [5] to prove that any regular language is the language accepted by some finite automaton or to show that the class of all regular languages is closed with respect to concatenation. In [2], sequential compositions are used to prove that inductive Turing machines can compute (generate) and decide the whole arithmetical hierarchy. However, for computational and other processes, it is possible to build more kinds of sequential composition:

Definition 23. The weak sequential composition PwsQ of processes P and Q combines them so that the process PwsQ contains all events and relations between them from P and Q, and for any events a from P and b from Q, we have aODRb, i.e., all events from P precede all events from Q.

Example 3.
If we compute 7 + 11 by a process P and afterwards compute 812 by a process Q using the same computer, then the whole process W will be the weak sequential composition of processes P and Q.

Remark 2. When a process P produces no output data, then for any process Q, the weak sequential composition of processes P and Q coincides with the sequential composition of processes P and Q.

Remark 3. When a process P produces no output data, it does not mean that the process P gives no result. The result of the process P can be a change in some system, an information transmission or reception, etc.

Proposition 17. Any (weak) sequential composition is exact and strict.

Definition 24. The causal sequential composition PsQ of processes P and Q combines them so that the process PsQ contains all events from P and Q and relations between them, for any events a from P and b from Q, we have aODRb, and the first process transmits information to the second one.
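One way to realise Definition 23 operationally under the interval model is to delay every event of Q until after the last event of P has ended, which forces aODRb for every a from P and b from Q. This is our own sketch (an assumption, not the paper's construction), and it takes Q's events to be timed relative to zero:

```python
def weak_seq(p, q):
    """Weak sequential composition of two interval-event processes: keep P
    as is and shift Q so that every event of P precedes every event of Q."""
    if not p:
        return list(q)
    shift = max(end for _, end in p)  # Q may begin once P has finished
    return list(p) + [(start + shift, end + shift) for start, end in q]

def all_ordered(p, q):
    """aODRb for all a in p, b in q: each a ends before (or when) b starts."""
    return all(a[1] <= b[0] for a in p for b in q)
```

Because the shifted copies of P's and Q's events remain disjoint and nothing else is added, this construction is consistent with Proposition 17: the result is exact and strict.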
It is natural to make a distinction between data and control information (instructions, commands, etc.). This separates the causal sequential composition into three subclasses: data-driven, control-incited and combined sequential composition. Data-driven composition is given in Definition 18.

Definition 25. The control-incited sequential composition PsQ of processes P and Q combines them so that the process PsQ contains all events from P and Q and relations between them, for any events a from P and b from Q, we have aODRb, and the first process sends some control information (e.g., an instruction or rule) that is used in the second process.

It is possible that the second process uses both data and control information from the first process. In this case, we have the combined sequential composition.

The sequential composition is related to sequential processes:

Definition 26. A process P is called sequential if for any two events a and b from P, we have bODRa or aODRb – i.e., one event from any two precedes the other one.

Lemma 1. Any finite sequential process has a first and a last event.

Proposition 18. Any subprocess of a sequential process is a sequential process.

Definition 27. Two events a and b are directly sequentially connected if aODRb and there is no event c such that relations aODRc and cODRb are true.

Proposition 19. Any sequential process is a sequence of its directly sequentially connected events.

Definition 28. A subprocess Q of a sequential process P is called complete if for any two events a and b from Q, the relations aODRc and cODRb imply that c also belongs to Q.

Proposition 20. Any sequential process is the sequential composition of, at least, two of its complete subprocesses.

Theorem 2. The weak sequential composition of two sequential processes is a sequential process.

As any sequential composition of processes is also a weak sequential composition, we have the following result:

Corollary 8.
The sequential composition of two sequential processes is a sequential process.

This gives us an algebraic structure on the set of all sequential processes.

Corollary 9. The set of all sequential processes (in some system) is a semigroup with respect to weak sequential composition.

Remark 4. The above result is valid for any composition operator (note that not all composition operators are associative).
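As an illustration of Definition 26 and Theorem 2, the following Python sketch (the names are ours; EVCR prescribes no such API) models a process as a set of events with a precedence relation ODR, and checks that the weak sequential composition of two sequential processes is again sequential:

```python
# Illustrative sketch only: events are strings, the ODR precedence relation
# is a set of ordered pairs. These names are ours, not part of EVCR.

def is_sequential(events, odr):
    """Definition 26: any two distinct events are ODR-comparable."""
    return all(a == b or (a, b) in odr or (b, a) in odr
               for a in events for b in events)

def weak_seq_composition(p, q):
    """Compose (events, odr) pairs P and Q so that every event of P
    precedes every event of Q (event sets assumed disjoint)."""
    (pe, podr), (qe, qodr) = p, q
    events = pe | qe
    odr = podr | qodr | {(a, b) for a in pe for b in qe}
    return events, odr

# Theorem 2 on a small instance: composing two sequential processes
# yields a sequential process.
P = ({"a1", "a2"}, {("a1", "a2")})
Q = ({"b1", "b2"}, {("b1", "b2")})
E, R = weak_seq_composition(P, Q)
assert is_sequential(E, R)
```

Since the composition is again an (events, odr) pair, it can itself be composed further, which is the semigroup structure of Corollary 9 at the level of this toy model.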
Definition 29. The parallel composition PparQ of processes P and Q combines them so that (1) the process PparQ contains all events from P and Q and the relations between them, and (2) any event a from P is coexisting with some event b from Q (i.e., we have aCERb).

Proposition 21. Any parallel composition is exact.

Proposition 22. If R is a subprocess of P, then RparQ is a subprocess of PparQ (for any process Q).

Remark 5. However, if R is a subprocess of Q, then PparR is not necessarily a subprocess of PparQ (for any process P). This shows that parallel composition of processes is not symmetric.

Definition 30. The composition PparQ of processes P and Q is strictly parallel if (1) the process PparQ contains all events from P and Q and the relations between them, (2) any event b from Q is coexisting with some event a from P (i.e., we have bCERa), and (3) any event a from P is coexisting with some event b from Q (i.e., we have aCERb).

Lemma 2. Any strictly parallel composition is a parallel composition.

This means that the strictly parallel composition inherits many properties of the parallel composition:

Proposition 23. Any strictly parallel composition par of processes is associative, i.e., for any processes P, R and Q, we have (PparQ)parR = Ppar(QparR).

Remark 6. It is possible to show that sequential and parallel compositions of processes are induced by temporal sequential and parallel compositions of systems. Temporal sequential composition allows one to build the sequential composition of a system with itself.

Parallel composition of processes corresponds to the logical operation "and". It is also possible to build a composition of processes that corresponds to the logical operation "or". It is called selective composition. Let P(x) be a predicate (a function that takes values 0 and 1):

Definition 31. The selective composition P(x)(P | Q) of processes P and Q is equal to the process P when P(x) = 1 and equal to the process Q when P(x) = 0.

Events can be elementary and complex.
Complex events consist of other events. Thus, it is possible to consider a process as a complex event. Moreover, any set of events can formally be considered as one complex event when we ignore the relations between the initial events.

Proposition 24. If aICRb and aICRc, then bCERc.

Corollary 10. If a is an elementary event, aCERb and aCERc, then bCERc.

Proposition 7 implies the following result:

Corollary 11. Coexisting elementary events are simultaneous.

Proposition 25. An event simultaneous with an elementary (for a set E) event is itself elementary (for E).
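The selective composition of Definition 31 can be sketched in Python (an illustrative toy with names of our choosing, in which processes are modeled as functions and the predicate P(x) as a Boolean test):

```python
# Sketch of Definition 31: the composed process behaves as P when the
# predicate holds and as Q otherwise -- the process-level analogue of a
# conditional choice ("or"). Processes-as-functions is our simplification.

def selective_composition(pred, P, Q):
    """Return the process that acts as P when pred(x) = 1, as Q when 0."""
    def composed(x):
        return P(x) if pred(x) else Q(x)
    return composed

double = lambda x: 2 * x   # plays the role of process P
negate = lambda x: -x      # plays the role of process Q
sel = selective_composition(lambda x: x >= 0, double, negate)
assert sel(3) == 6    # predicate holds: behaves as P
assert sel(-3) == 3   # predicate fails: behaves as Q
```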
Definition 32. A faithful composition c of processes P and Q is quasi-parallel if the processes P and Q, considered as complex events, are parallel events in c(P, Q).

Remark 7. It is not always the case that a quasi-parallel composition of two processes is parallel, or vice versa.

If a composition c is quasi-parallel, it means that finite processes P and Q start and end at the same time. This gives us the following result.

Proposition 26. If a composition c is quasi-parallel, then the first events from P and Q are simultaneous and the last events from P and Q are simultaneous.

There are many other kinds of process composition, one of which is considered in the next section. All introduced constructions of EVCR allow one to build VCR as a submodel of EVCR. In that submodel, only the simultaneity relation STR is considered, because all events are instantaneous.

4. Composition of I/O-PAR Processes

Why devote so much time to exploring so many different types of composition? One reason is that thinking deeply about composition in general may lead to new ways of reasoning about known properties of specific types of composition. For example, consider the composition of I/O-PAR processes, studied by Welch, Martin, and others [8, 9, 15, 16]. Welch proved that a network of I/O-PAR processes is deadlock-free, and described composite-I/O-PAR processes, which could in turn be used in further compositions. (The I/O-SEQ design pattern for processes is also presented, but is not the focus of this discussion.) The proofs of deadlock freedom and of the closure of I/O-PAR under composition are not simple, and reasoning about the closure of I/O-PAR under composition from a process's traces is challenging. We illustrate this point, and then present a definition of composition that, in conjunction with EVCR, permits identifying a composite-I/O-PAR process from its trace.
An I/O-PAR process is one that behaves deterministically and cyclically, by first engaging in all the events of its alphabet, then repeating this behavior. For example, consider the I/O-PAR process P, where:

P = ( a → SKIP ||| b → SKIP ) ; P

Process P's behavior, represented by its set of all possible traces, traces(P), can be succinctly described by the following VCR-style parallel event trace:

trP = < {a, b}, {a, b}, {a, b}, … >

The benefits of the VCR-style representation of P's behavior are evident. In particular, the set of views of this trace all adhere to the definition of an I/O-PAR process. Now, suppose we wished to reason about I/O-PAR process Q, whose specification and respective parallel event trace are as follows:

Q = ( b → SKIP ||| c → SKIP ) ; Q

trQ = < {b, c}, {b, c}, {b, c}, … >
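The cyclic behaviour just described can be mirrored in a few lines of Python (our own modeling, not a VCR API): an I/O-PAR process's VCR-style trace is simply its alphabet repeated as one parallel event per cycle.

```python
from itertools import islice

# Sketch: each cycle of an I/O-PAR process engages in its whole alphabet,
# so the VCR-style parallel event trace is an infinite repetition of the
# alphabet viewed as a single parallel event.

def io_par_trace(alphabet):
    while True:
        yield frozenset(alphabet)

# The first three parallel events of trP for P's alphabet {a, b}:
trP = list(islice(io_par_trace({"a", "b"}), 3))
assert trP == [frozenset({"a", "b"})] * 3
```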
Now we have two I/O-PAR processes. It is natural to wish to compose P and Q in such a way that the composition of P and Q is also I/O-PAR (technically, composite-I/O-PAR). Notice, though, that P and Q both engage in event b, which implies that b is a synchronizing event when P and Q are composed in parallel. In particular, we'd like to be able to compose P and Q in such a way that traces(P || Q) could be described by tr1, expressed as a VCR-style parallel event trace:

tr1 = < {a, b, c}, {a, b, c}, {a, b, c}, … >

Such a trace clearly reflects an I/O-PAR process, and traces like tr1 are possible; but the problem is that under existing parallel composition in CSP, other traces are also possible. It is these other possible traces that make the composition of processes P and Q difficult to recognize as being I/O-PAR. The existence of these traces also provides some intuition into the difficulty of proving that I/O-PAR processes are closed under composition.

Suppose process P completes one iteration of its events, then recurses before process Q can engage in all of its events. At this point, process Q is still waiting to engage in its first event c, even though process P has engaged in its second event a. However, process P can't get any further ahead of Q, because it's waiting for the synchronizing event b, which won't occur until process Q first engages in its first event c, then recurses. Now suppose Q "catches up" to P, by engaging in c and recursing. At this point, Q may engage in either event of its alphabet, since process P is already waiting to engage in the synchronizing event b. Suppose processes P and Q engage in synchronizing event b, then c.
At this point, both processes have completed two cycles of engaging in all the events in their respective alphabets, which results in the following CSP-style trace:

tr2 = < a, b, a, c, b, c, … >

Trace tr2 isn't readily recognizable as I/O-PAR, but Martin proved that such a trace of a composite-I/O-PAR process is permissible, and still qualifies as I/O-PAR. However, it is easy to imagine that a few more processes and larger alphabets would make such compositions difficult to recognize as being I/O-PAR. Even the parallel event traces of VCR do not provide much help in this regard:

tr3 = < {a, b}, {a, c}, {b, c}, … >

Trace tr3 no longer emulates the I/O-PAR design pattern behavior shown in tr1. In particular, it's possible to observe two a's by the time the first c is observed. Strictly speaking, for this process to be I/O-PAR, events a, b, and c must each occur once before any of a, b, or c occurs for a second time. However, Martin proved that a limited amount of "slush" is possible, without losing the benefits of deadlock freedom. In this sense, the process that is capable of producing trace tr2 is I/O-PAR "enough."

An interesting question is what type of composition could preserve the I/O-PAR property as evidenced by such a composite-I/O-PAR process's traces. We describe informally how to compose such a process, and in so doing, introduce a new type of composition. The goal of such a composition is to result in a process whose behaviour matches process R0:

R0 = ( a → SKIP ||| b → SKIP ||| c → SKIP ) ; R0

Process R0 does not reflect the composition of processes P and Q, however. But consider process R1, which is a little closer to the desired composition:

R1 = ( a → SKIP ||| b → SKIP ) |{b}| ( b → SKIP ||| c → SKIP ) ; R1
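As an aside, the strict I/O-PAR condition stated above (each of a, b, and c occurring once before any occurs a second time) can be checked mechanically over a flat CSP-style trace. A Python sketch, with a helper of our own devising rather than anything from CSP or VCR:

```python
def strictly_io_par(trace, alphabet):
    """True iff, cycle by cycle, each alphabet event occurs exactly once
    before any event occurs again (i.e., no "slush")."""
    n = len(alphabet)
    for i in range(0, len(trace) // n * n, n):
        if sorted(trace[i:i + n]) != sorted(alphabet):
            return False
    return True

# tr1 flattened cycle-by-cycle is strictly I/O-PAR; tr2 (with slush) is not.
assert strictly_io_par(list("abcabc"), list("abc"))
assert not strictly_io_par(list("abacbc"), list("abc"))
```

The second assertion is exactly the tr2 situation: the second a appears before the first c, so the trace fails the strict check even though, by Martin's result, the process producing it is still deadlock-free.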
Process R1 is a little closer in spirit to achieving our goal, since its definition reflects the composition of the definitions of P and Q (without the recursive invocations of P and Q). But if we are to define a type of composition for processes, we must be able to treat the processes as black boxes, or at least not resort to changing the definitions (bodies) of the processes being composed. Consider the naïve composition:

R2 = P |{b}| Q

This is the standard CSP alphabetized parallel composition, and it is the composition that results in traces like tr1 and tr2. Processes R0 and R1 are refinements of each other, because they both produce the same set of traces, described by tr1. Furthermore, processes R0 and R1 both represent a refinement of R2, since R2 is capable of producing all the traces of R0 and R1, as well as other possible traces (the traces with some limited "slush"). How then can we compose processes P and Q so that only traces like tr1 are possible? If we can solve this problem, we will have defined a type of composition that preserves the I/O-PAR property.

We begin by renaming processes P and Q to P' and Q', respectively. That is, P' and Q' are bound to the original definitions of P and Q. In particular, the recursions still refer to the original process names, P and Q, within the bodies of P' and Q'. Our goal is to remove the "slush" by synchronizing the recursive calls of the I/O-PAR processes being composed. So far, we have:

P' = ( a → SKIP ||| b → SKIP ) ; P
Q' = ( b → SKIP ||| c → SKIP ) ; Q

Notice that P and Q are no longer recursive calls, since we renamed these processes. We therefore need to provide new definitions for P and Q. The new definitions for P and Q are trivial and recursively symmetrical:

P = x → P' (where the synchronizing event x is not in the alphabets of P or Q)
Q = x → Q'

In this way, processes P' and Q' must synchronize on x before recursing.
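The effect of this synchronized recursion can be simulated concretely: a barrier plays the role of the hidden event x, so no component may start its next cycle until all have finished the current one. A Python sketch with illustrative names and disjoint alphabets (the additional synchronization on the shared event b is elided for simplicity):

```python
import threading
from collections import Counter

def io_par_process(alphabet, barrier, log, lock, cycles):
    """P' = (all events of the cycle) ; x -> P'  -- barrier.wait() is x."""
    for _ in range(cycles):
        for e in alphabet:          # engage in all events of this cycle
            with lock:
                log.append(e)
        barrier.wait()              # synchronize the recursion on x

def compose(alphabets, cycles=3):
    barrier = threading.Barrier(len(alphabets))
    log, lock = [], threading.Lock()
    ts = [threading.Thread(target=io_par_process,
                           args=(a, barrier, log, lock, cycles))
          for a in alphabets]
    for t in ts: t.start()
    for t in ts: t.join()
    return log

log = compose([("a", "b"), ("c", "d")], cycles=3)
# No slush: every cycle's events complete before any event of the next.
for i in range(0, len(log), 4):
    assert Counter(log[i:i + 4]) == Counter("abcd")
```

Within each cycle the interleaving is still nondeterministic, but the barrier guarantees the cycle boundaries line up, which is exactly the trace shape of tr1.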
Now let us define the composite-I/O-PAR process R3 to be the alphabetized parallel composition of P' and Q':

R3 = ( P' |{b, x}| Q' ) \ {x}

which specifies the parallel composition of processes P' and Q', where events b and x are synchronizing events. Furthermore, we are hiding event x from appearing in traces(R3). The resulting traces(R3), expressed in the VCR style, now look like tr1, which is recognizably I/O-PAR. It is easy to see that this composition nests, i.e., it extends to three or more processes.

We have exploited renaming to achieve our goal, though it is a renaming of processes, rather than events, which is somewhat unusual in CSP. Still, this approach is not entirely unrelated to the approach used to create multiple instances of a process, each with its own unique instance of the events it engages in. The renaming of processes models what is possible on a real computer, e.g., renaming the games "rogue" or "hack" to "vi" on a Unix system (to appear to one's boss, or advisor, to be hard at work!); or even renaming a compiled C program from a.out to a more meaningful executable name. The renaming of
the original processes being composed allowed us to introduce new definitions for the original process names, and thus "insert" a synchronizing event into the recursion of the processes being composed, without modifying the contents of the black boxes. We haven't named this type of composition yet, but we know enough about it to describe its properties. Since processes P and P', and Q and Q', are mutually recursive and synchronizing, this type of composition is a synchronized, mutually recursive renaming composition. But it seems more appropriate simply to call it I/O-PAR composition. A more thorough and formal treatment of I/O-PAR composition remains to be done, including composition of three or more processes, and a formal proof of refinement showing that process R3 refines R0 (and vice versa). In addition, we did not address I/O-SEQ, another important design pattern used alongside I/O-PAR to ensure deadlock freedom.

5. Conclusions and Future Work

We continued the development of extended view-centric reasoning (EVCR), presented in [3], with compositions of concurrent processes. The new definitions of events, and the many ways events may coincide with other events, borrow abstractions from Grid Automata [2], and form the basis for EVCR as a model that provides the abstractions needed for reasoning about properties of modern computational systems. In particular, EVCR allows events A and B to overlap in varying degrees, instead of the all-or-nothing simultaneity of VCR's (and CSP's) instantaneous events. This added dimension of event duration makes EVCR a more natural model for reasoning about Grid Automata and concurrent processes, which must take time and location into account. We constructed compositions of processes on a system level and studied several kinds of compositions. In doing this, we have only laid the foundation for a model that holds much promise.
To support this claim, we considered a case study of I/O-PAR composition, and demonstrated that EVCR provides the basis for a much simpler proof of the compositionality of I/O-PAR processes. A formal statement and proof remain to be shown. The next step in this direction will be to continue to build a comprehensive model of process composition. Another prospective direction for the further development of EVCR is the automation of commonsense reasoning, a long-standing goal of the field of artificial intelligence. EVCR provides a flexible base for enhancing the event calculus, which is used as an effective technique for commonsense reasoning [6, 10].

The constructions introduced, concepts defined, and results obtained are steps toward the development of a fundamental, adequate, and efficient theory of compositions of systems and processes. Such a theory has to be developed on three levels. The first, organizational or algorithmic level would deal with the organization of connections and interactions between composed systems and processes. The second, algebraic level would deal with algebras of compositions. There are two types of such algebras: algebras of systems (processes), in which compositions play the role of operations, and algebras of compositions themselves, in which compositions play the role of elements. These two levels are necessary for the development of reliable, safe, and efficient hardware and software systems from well-designed components. The third, logical level of the theory of compositions of systems and processes would provide means for reasoning about composed systems and processes. This is necessary for the assurance and verification of composed systems.
Acknowledgements

The authors are very grateful to the anonymous CPA referees for their diligent and constructive feedback on our original submission and all the subsequent revisions. Their work is greatly valued.

References

[1] M. Burgin, Elements of the System Theory of Time. LANL, Preprint in Physics 0207055, 2002, 21 p. (electronic edition: http://arXiv.org).
[2] M. Burgin, Super-recursive Algorithms. Springer, New York, 2005.
[3] M. Burgin and M.L. Smith, From Sequential Processes to Grid Computation. In Proceedings of the 2006 International Conference on Foundations of Computer Science (FCS'06), edited by Hamid R. Arabnia and Rose Joshua, CSREA Press, Las Vegas, Nevada, USA, 26-29 June 2006.
[4] C.A.R. Hoare, Communicating Sequential Processes. Prentice Hall International Series in Computer Science. UK: Prentice-Hall International, UK, Ltd., 1985.
[5] J.E. Hopcroft, R. Motwani, and J.D. Ullman, Introduction to Automata Theory, Languages, and Computation. Addison Wesley, Boston/San Francisco/New York, 2001.
[6] R. Kowalski and M.J. Sergot, (1986) A Logic-based Calculus of Events. New Generation Computing, v. 4, pp. 67-95.
[7] W. Li, S. Ma, Y. Sui, and K. Xu, (2001) A Logical Framework for Convergent Infinite Computations. Preprint cs.LO/0105020 (electronic edition: http://arXiv.org).
[8] J.M.R. Martin, I. East, and S. Jassim, (1994) Design Rules for Deadlock Freedom. Transputer Communications, 3(2):121-133, John Wiley and Sons.
[9] J.M.R. Martin and P.H. Welch, (1996) A Design Strategy for Deadlock-Free Concurrent Systems. Transputer Communications, 3(4):215-232, John Wiley and Sons.
[10] E.T. Mueller, Commonsense Reasoning. San Francisco, Morgan Kaufmann, 2006.
[11] P.G. Neumann, (2006) Risks Relating to System Compositions. Communications of the ACM, v. 49, No. 1, p. 128.
[12] M.O. Rabin, (1969) Decidability of Second-order Theories and Automata on Infinite Trees. Transactions of the AMS, v. 141, pp. 1-35.
[13] M.L. Smith, View-Centric Reasoning about Parallel and Distributed Computation. PhD thesis, University of Central Florida, Orlando, FL 32816-2362, December 2000.
[14] M.Y. Vardi and P. Wolper, (1994) Reasoning about Infinite Computations. Information and Computation, v. 115, No. 1, pp. 1-37.
[15] P.H. Welch, (1989) Emulating Digital Logic using Transputer Networks (Very High Parallelism = Simplicity = Performance). International Journal of Parallel Computing, 9, North-Holland.
[16] P.H. Welch, G.R.R. Justo, and C.J. Willcock, Higher-Level Paradigms for Deadlock-Free High-Performance Systems. In R. Grebe, J. Hektor, S.C. Hilton, M.R. Jane, and P.H. Welch, editors, Transputer Applications and Systems '93, Proceedings of the 1993 World Transputer Congress, volume 2, pages 981-1004, Aachen, Germany, September 1993. IOS Press, Netherlands.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Software Specification Refinement and Verification Method with I-Mathic Studio Gerald H. HILDERINK Imtech ICT Technical Systems, P.O.Box 7111, 5605 JC, Eindhoven, The Netherlands
[email protected]

Abstract. A software design usually manifests a composition of software specifications. It consists of hierarchies of black box and white box specifications which are subject to refinement verification. Refinement verification is a model-checking process that proves the correctness of software specifications using formal methods. Although this is a powerful tool for developing reliable and robust software, the applied mathematics causes a serious gap between academics and software engineers. I-Mathic comprises a software specification refinement and verification method and a supporting toolset, which aims to eliminate this gap by hiding the applied mathematics behind practical modelling concepts. The model-checker FDR is used for refinement verification and for detecting deadlocks and livelocks in software specifications. We have improved the method by incorporating CSP programming concepts into the specification language. These concepts make the method suitable for a broader class of safety-critical concurrent systems. The improved I-Mathic is illustrated in this paper.
Introduction

The proposed method is about formulating software specifications of components in a program. Software specifications aim at describing the desired behaviour of the software. Software specifications need to be as precise and complete as possible and should eliminate doubts about possible misbehaviours or uncertainties of the program. Incomplete software specifications in, for example, an embedded computer system can cause surprises that may have disastrous effects on its environment; e.g. a software crash in an airplane or rocket could risk people's lives, and defects in consumer electronics products due to a software bug could cause an economic disaster. We need tools and methods to get software specifications right, and desirably right the first time.

A practical software specification method has been described by Broadfoot [1]. The method combines two formal methods: (1) the Cleanroom Software Engineering method [4] and (2) the CSP theory [3; 5]. The method allows for a precise transformation of informal customer requirements into formal software specifications suitable for refinement verification by model-checking. The method has been developed on behalf of Imtech ICT Technical Systems in the Netherlands and it has been successfully applied to high-tech products at several technology-leading companies, such as Assembléon and Philips Applied Technologies in the Netherlands. The method is called I-Mathic. The supporting toolset is called I-Mathic Studio.

After years of experience, we conclude that the method is suitable for relatively small controller components. This is because the method was originally based on a sequence-based modelling approach whereby sequence enumerations can (easily) result in complex finite state machines. Nowadays, controller components in embedded systems get bigger
and bigger due to a growing number of features they must perform. In order to overcome this problem, we investigated the use of CSP programming concepts to enrich the specification language and toolset with a notion of concurrency. In general, concurrency is an important concept for dealing with the complexity of systems [2]. The result is an improved I-Mathic that is suitable for describing software specifications that scale with the complexity of computer systems. The new I-Mathic enhancements are introduced in this paper.

1. I-Mathic Modelling Approach
A complete composition of software specifications should describe the entire behaviour and operational structure of the program. It consists of hierarchies of abstract and concrete specifications which are subject to refinement verification. I-Mathic embraces a software specification refinement and verification method, which applies formal methods to precisely describe, analyse and connect separate software specifications in a systematic way. The I-Mathic method provides a practical and powerful approach for describing and verifying software specifications in an abstract, compositional, and provable manner.

1.1. Software Specifications and Requirements

A software specification of a software component manifests (a part of) the customer functional requirements or a particular part of the software design. A software specification is shaped such that it is suitable for software design. A software specification is expected to be a precise and complete description of the component and its interface. The refinement and verification method assists the user in discovering and eliminating undefined issues in the customer requirements. The method helps with identifying additional requirements, so-called derived requirements.

An abstract specification is usually refined by a more concrete specification. In a hierarchy of specifications, a concrete specification can be an abstract specification of a deeper and more concrete specification. Sometimes a concrete specification is called the implementation of the abstract specification. However, at the design phase an implementation is commonly described as a (most) concrete specification. Therefore, the proposed method works with a model of specifications; the term implementation is reserved for the implementation phase of the development process. The abstract and concrete specifications produced by the proposed method are computational, and can therefore be translated into code useful for simulation or model-checking.
Of course, those specifications that are deterministic with stable states can usefully be translated into source code for program execution; these leaf specifications are the building blocks of the final program.

1.2. Applied Formal Methods

I-Mathic integrates four formal methods which provide a systematic and mathematical foundation. Each formal method takes care of a responsibility in the specification modelling process, as discussed below. I-Mathic Studio hides the mathematical foundation from the software engineer. This foundation is used by the toolset to provide the software engineer
with automation and assistance during the modelling process. This results in consistent and complete software specifications. The following four¹ formal methods are incorporated:
• Box Structured Development Method. The box structured development method outlines a hierarchy of concerns by a box structure, which allows one to divide, conquer, and connect software specifications. The box structured development method originates from Cleanroom. Cleanroom defines three views of the software system, referred to as the black box, state box and clear box views. An initial black box is refined into a state box and then into a clear box. A clear box can be further refined into one or more black boxes, or it closes a hierarchical branch as a leaf box providing a control structure. This hierarchy of views allows for a stepwise refinement and verification, as each view is derived from the previous one. The clear box is verified for equivalence against its state box, and the state box against its black box. The box structure should specify all requirements for the component, so that no further specification is logically required to complete the component. We have slightly altered the box structured development method to make it more beneficial. This is discussed in Section 2.2.

• Sequence-Based Specification Method. The sequence-based specification method also originates from Cleanroom. It describes the causality between stimuli and responses using a sequence enumeration table. Sequence enumerations describe the responses of a process after accepting a history of stimuli. Every mapping of an input sequence to a response is justified by explicit reference to the informal specifications. The sequence-based specification method is applied to the black box and state box specifications. Each sequence enumeration can be tagged with a requirement reference. The tagged requirement maps a stimuli-response causality of the system to the customer or derived requirements.

• CSP/FDR Framework. CSP stands for Communicating Sequential Processes [3; 5], which is a theory of programming.
CSP is a process algebra comprising mathematical notations based on operators for describing patterns of parallel interactions. FDR is a model-checker for CSP. The CSP/FDR framework is used for formal verification. Box structures and sequence enumerations are translated to CSP algebra (in a machine-readable CSP format) which is input for FDR. A box structure of software specifications provides a hierarchy of refinement relationships between abstract and concrete specifications. These pairs of specifications can be verified for completeness and correctness using refinement checking. FDR also detects pathological problems in specifications, such as deadlock, livelock and race conditions.

• CSP Programming Concepts. The CSP programming concepts implement a subset of the CSP theory by software engineering constructs. These pragmatic constructs capture concurrency at a high level of abstraction (far beyond multithreading) and hide the CSP mathematics from the user. These CSP programming concepts can be built into a programming language (e.g. occam, Ada, Limbo, Handel-C, HASTE), provided by libraries (e.g. CP, JCSP, C++CSP, CTJ), or be part of a specification language (e.g. gCSP, I-Mathic). These concurrency concepts are often seen as a welcome addition to the user's knowledge of sequential programming and modelling. These concepts can also be used for programming hardware.
¹ The method described by Broadfoot [1] includes only the first three formal methods.
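To make the flavour of such CSP programming concepts concrete, here is a toy rendezvous channel in Python. It is our own illustrative sketch, not the API of JCSP, C++CSP, CTJ, or any other library named above: the writer blocks until the reader takes the value, mirroring CSP's synchronous communication.

```python
import threading
import queue

class Channel:
    """Toy CSP-style synchronous channel: write blocks until read."""
    def __init__(self):
        self._data = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def write(self, value):
        self._data.put(value)
        self._ack.get()          # rendezvous: wait until the reader has it

    def read(self):
        value = self._data.get()
        self._ack.put(None)      # release the blocked writer
        return value

# A two-process network: a writer thread and a reader, synchronized per value.
c = Channel()
received = []
reader = threading.Thread(
    target=lambda: received.extend(c.read() for _ in range(3)))
reader.start()
for v in (1, 2, 3):
    c.write(v)
reader.join()
assert received == [1, 2, 3]
```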
The black box, state box and clear box views are distinct usage perspectives which are effective in defining the behaviour of individual components, but they provide little in the way of compositionality. Combining specifications cannot make statements about the behaviour as a whole [1]. Cleanroom provides no means of describing concurrent (dynamic) behaviours and analysing pathological problems, such as deadlock, livelock, race conditions and so on. In Cleanroom, the notion of concurrency was occasionally mentioned as a solution for describing control structures, but no sound semantics was given. Since Cleanroom is sequence-based, concurrency adds a new dimension to the complexity of sequence-based specifications. We argue that the use of concurrency should make the specifications of complex systems simpler and more natural rather than more complex and artificial. In fact, concurrency provides compositionality. Sound semantics of concurrency is given by CSP. The lack of concurrency and compositionality in Cleanroom can be solved by applying CSP programming concepts to box structures and sequence enumerations. This is why the CSP programming concepts are integrated in I-Mathic, which affects the Cleanroom box structured development method and the sequence-based specification method. The CSP programming concepts and the CSP theory are considered two different formal methods with respect to the different contexts they are intended for.

1.3. I-Mathic Supporting Toolset – I-Mathic Studio

The I-Mathic method is supported by a software development toolset, which is called I-Mathic Studio. Figure 1 shows a screenshot of the graphical user interface. The tree view on the left side is used for navigation and adding/removing/renaming specification elements. The tree view is divided into three sections: service interfaces, process classes and definitions. Service interfaces and process classes are discussed in Section 2.2.1.
The definitions are global enumeration types and object types that are used by the elements. The panel on the right side is used to model the specifications or diagrams. The example in Section 4 shows more screenshots.
Figure 1. I-Mathic Studio graphical user interface.
The toolset assists the user with the specification process and it automates the following features:

• create/load, edit, refactor, and save software specifications,
• navigation between the software specifications,
• specifying relationships between software specifications, such as equivalence relationships and communication relationships,
• transformation of the software specifications to machine-readable CSP as input for the FDR model-checker,
• depicting software specifications by state transition diagrams or process diagrams,
• code generation to, for example, C++,
• document generation, e.g. a derived requirements overview.

The code generator is customizable to the customer's coding standards and programming language. Most specification elements can be documented separately and tagged with a requirement reference. This allows for requirements traceability. The software specifications are saved in XML format, which can be used by other software design tools. I-Mathic Studio is evolving with additional features, which are not mentioned in this paper.

2. Process-oriented Software Specification Development
2.1. Concurrency

The inputs and outputs of a computer system interact with a concurrent world. The inputs (stimuli) and outputs (responses) of a program synchronize with the outside world, and they occur in sequence, in parallel, or as the outcome of some choice. It is important to be able to describe the behavioural relationships between the concurrent inputs and outputs. Not surprisingly, describing parallel inputs and outputs with a single sequence enumeration is far too complex. Therefore, the software specification refinement and verification method should encompass concurrency concepts, which are provided by the CSP theory. Describing the behaviour of a complex process with sequential enumerations can easily result in a complex description: the vast number of states and transitions makes reasoning and modelling difficult. Describing concurrency aspects (e.g. synchronization, parallelism, non-determinism and timing) in a finite state machine, which is inherently sequential, makes it even harder or perhaps impossible. Concurrency concepts are therefore required in order to manage the complexity of components, and a software specification should be able to describe a composition of smaller and simpler specifications. A software specification can be described by a single process or by a network of parallel processes. A network of processes is itself a process (networks of networks). Both single-process and network descriptions can reflect deterministic and non-deterministic behaviours. Non-determinism is a natural phenomenon in a system's behaviour, which should not be ignored. The granularity of (non-)determinism depends on the abstraction of the process. The ability to describe non-determinism can make a specification shorter, more natural and more abstract. The interface of a process reveals the behaviour of interaction via its ports.
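The claim that a network of processes is itself a process can be made concrete outside occam-π as well. The following Go sketch is purely illustrative (Go channels give a CSP-like rendezvous; the names double, inc and pipe are inventions of this sketch): a process is a function over channels, and composing two processes yields another value of the same type.

```go
package main

import "fmt"

// A process maps an input stream to an output stream.
type Process func(in <-chan int, out chan<- int)

// double and inc are two simple sequential processes.
func double(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * 2
	}
	close(out)
}

func inc(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v + 1
	}
	close(out)
}

// pipe composes two processes into a network; the network is itself
// a Process, so composition nests arbitrarily (networks of networks).
func pipe(p, q Process) Process {
	return func(in <-chan int, out chan<- int) {
		mid := make(chan int)
		go p(in, mid)
		q(mid, out)
	}
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go pipe(double, inc)(in, out)
	go func() {
		for _, v := range []int{1, 2, 3} {
			in <- v
		}
		close(in)
	}()
	for v := range out {
		fmt.Println(v)
	}
}
```

Because pipe returns a Process, networks nest arbitrarily, e.g. pipe(pipe(double, inc), double).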
2.2. Box Structured Development Method in the light of processes

In Cleanroom, the box structured development method defines three views of the software system, known as the black box, state box and clear box views. A box structure consists of usage components (boxes), and the development method results in a complete mapping between the customer requirements and the structural operations of the resulting software. However, the sequential nature of the sequence-based specification method and the lack of behavioural composition limit scalability, so Cleanroom is mainly useful for small components. We argue that viewing box structures in the light of communicating processes eliminates this limitation. In I-Mathic, a box represents a process. A process has a life of its own and is completely in control of its own data and operations, which are encapsulated by the process. A process can play the role of both a usage component and a building block of the system architecture. I-Mathic enables a straightforward mapping between a software specification and its implementation.

2.2.1. Black Box and White Box Specifications

Using Cleanroom's box structures, we experienced a strong coupling between the black box view and the state box view, which results in a single software specification. It is more efficient to define an abstract specification and a concrete specification for, respectively, the outside and the inside of a process. Therefore, two abstract views of a process are defined, referred to as the black box and white box views of a process. These two views replace the three views in Cleanroom. A black box view describes an abstract specification and a white box view describes a concrete specification of a process, respectively called a black box specification and a white box specification. A black box specification treats the system as a "black box": it does not use knowledge of the internal structure.
A black box specification focuses on the functional requirements. A white box specification peeks inside the "box" and explicitly uses internal knowledge (the design) of the process. The white box specification describes a software specification at a lower level in the hierarchy; it does not describe the implementation of the process. Both the black box and white box views/specifications can be composed of concurrent sub-specifications, and both can be described by a sequence enumeration (Section 2.3) or by a network of processes. A black box specification can be created at any time, but it is only necessary when a process design must be tested against the requirements. This is usually the top process, although it can be necessary to specify black box specifications at other levels of the design at which the requirements must be judged. The specifications at the leaf ends of the box structure are typically black box specifications; they describe the behaviour of custom or third-party code. The black box and white box specifications of a process share the same process interface, or more precisely a set (all or a subset) of the ports of the process interface. The white box specification of a process can be a refinement of one or more black box specifications (if given). Each black box specification describes a particular client view of the process. If a process provides both black box and white box specifications then both behaviours, with respect to the shared set of ports, must be equal! This process-oriented box structured development method is effective in detecting and identifying defects in the customer requirements. Furthermore, the composition of processes renders a communicating process architecture, which organizes structural and functional decomposition.
2.2.2. Process-Oriented and Service-Oriented Design

Two types of classifications define the process-oriented and service-oriented design aspects of a software specification, namely:

• Process classes. A process is an instance of its process class. Process classes define the type of processes: a process class defines a process interface and describes the behaviour of that interface. Two kinds of process class are distinguished: (1) sequence enumeration process classes and (2) network process classes.

• Service interfaces. A service interface is a collection of services a process can offer. A process class must implement a service interface in order to perform its services. Services are implemented as methods of the process (that is, of the process class that implements the service interface). Service interfaces define the types of ports and channels.

A process class defines public or private methods. Public methods implement services that can be accepted via the input ports. Private methods can be invoked by the sequence enumeration table, but are invisible to other processes. Methods are described by subsequence enumeration tables, which create a hierarchical finite state machine. Processes specify distinguishable behaviours at a high level of abstraction, higher than objects. Hence, software specifications are described in terms of non-objects, namely:

• processes, which provide the building blocks of a software architecture and compose functional tasks, responsibilities and concerns;
• services, which processes can offer;
• ports, which specify the process interface;
• channels, which connect processes via ports;
• and sequence enumerations, which describe the protocol of interaction in terms of events, states, decisions and transitions.

The communication relationships between processes are defined by channels that connect the process interfaces; see Figure 2. The communication relationships determine the integrity and consistency of a network of software specifications.
Figure 2. Box Structure of a single process consisting of multiple inner processes. (The figure shows input/output ports, channels, service interfaces such as {Start, Pause, Abort, ...} and {Paused, Aborted, ErrorOccurred, ...}, and contrasts the outside (black box, abstract) view with the inside (white box, concrete) view of a process containing processes p:Process_P, q:Process_Q, r:Process_R and s:Process_S.)
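To illustrate how a service interface fixes the type of a port, the following Go sketch models a port as a typed channel of service requests (the service names echo Figure 2, but the rendering itself is an invention of this sketch, not I-Mathic output):

```go
package main

import "fmt"

// A service request names the service being invoked; a port is a typed
// channel of such requests, so the service interface fixes the port type.
type Service string

type Port chan Service

// A server process accepts services on its input port and emits
// notifications on its output port (names are illustrative only).
func processS(in Port, out Port) {
	for req := range in {
		switch req {
		case "Pause":
			out <- "Paused"
		case "Abort":
			out <- "Aborted"
			close(out)
			return
		}
	}
}

func main() {
	in, out := make(Port), make(Port)
	go processS(in, out)
	in <- "Pause"
	fmt.Println(<-out) // Paused
	in <- "Abort"
	fmt.Println(<-out) // Aborted
}
```

Connecting an input port to an output port then amounts to sharing a channel value of the same Port type, mirroring the requirement that connected port types be equal.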
A process is identified by a process name and a process class. The process class defines the type of the process; i.e. the process interface and the software specification.
The ports are identified by a port name, a service interface and a direction; the service interface and the direction define the port type. The services a process can offer are bound by its set of input ports. Each service interface defines a view or group of services that a client process can request. Input ports can be connected to output ports, or an input/output port can be connected to its parent input/output port. In either case, the port types must be equal. The connections between processes are channels. Channels take care of synchronisation, process scheduling and data transfer. Channels are initially unbuffered and perform handshaking between interacting processes. Buffered channels are optional in circumstances of structural performance bottlenecks [2].

2.3. Sequence Enumerations in the light of processes

A sequence enumeration describes the possible sequences of stimuli, responses and states. The stimuli involve the inputs and the responses involve the outputs of a process. A stimulus is an incoming event, i.e. accepting and receiving a message. The stimulus triggers a transition from one state to another; the transition performs a response before entering the next state. A response is an outgoing event, i.e. sending a message. The states, internal actions and state transitions are completely encapsulated by the process. Table 1 illustrates a sequence enumeration table. The layout consists of states and a number of transitions (entries) per state. A transition describes a branch which is part of a sequence enumeration. A transition is ready to be executed when its condition and accept fields are both true. A sequence enumeration waits in the current state until at least one transition becomes ready. In case more than one transition is ready, one of these transitions is randomly² selected and executed. On selection, the accept and action fields are executed and then the next state is taken.

Table 1.
Sequence Enumeration Table Layout.

state 1:
  condition | accept | action | next state | description | ref.
  …         | …      | …      | …          | …           | …
  …         | …      | …      | …          | …           | …

state 2:
  condition | accept | action | next state | description | ref.
  …         | …      | …      | …          | …           | …
  …         | …      | …      | …          | …           | …
The fields are described as follows:

• Condition field. The condition is a Boolean expression over state variables using Boolean operators (currently only '==' and '!').

• Accept field. The accept field specifies a stimulus, i.e. an input port and a service, which is ready when another process is willing to request the service via the port. If the condition is true, the accept field is ready and the transition is selected, then the service is accepted. The service is executed by the process as part of the acceptance.
² The semantics of the choice construct is derived from the external choice construct in CSP.
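This transition-selection rule maps naturally onto a guarded external choice. In the Go sketch below (illustrative only, not I-Mathic output), a transition whose condition is false selects on a nil channel, which is never ready, and Go's select, like CSP external choice, chooses pseudo-randomly among the ready branches:

```go
package main

import "fmt"

// One state's worth of a sequence enumeration: each transition has a
// condition over state variables, an accept (an input channel), an
// action and a next state. A transition whose condition is false is
// disabled by selecting on a nil channel, which is never ready -- so
// the select behaves like a guarded external choice, picking
// pseudo-randomly among the ready transitions.
func enumeration(start, stop <-chan struct{}, done chan<- struct{}) {
	running := false // state variable
	for i := 0; i < 2; i++ {
		// condition fields: accept Start only when !running, Stop only when running
		var startC, stopC <-chan struct{}
		if !running {
			startC = start
		} else {
			stopC = stop
		}
		select {
		case <-startC:
			running = true // action: update state variable
		case <-stopC:
			running = false // action: update state variable
		}
	}
	done <- struct{}{}
}

func main() {
	start := make(chan struct{})
	stop := make(chan struct{})
	done := make(chan struct{})
	go enumeration(start, stop, done)
	start <- struct{}{} // stimulus accepted: condition !running held
	stop <- struct{}{}  // stimulus accepted: condition running held
	<-done
	fmt.Println("enumeration completed")
}
```

A client attempting Stop while !running would block (the stimulus is never accepted), which is the rendezvous analogue of the guard being false.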
• Action field. The action field specifies zero, one or more sequential actions. The actions are restricted to sending responses and updating internal state variables; the field is like a small code body with a C-like syntax. A response is described by an output port and a service: a response is a request for a service, via the output port, to a server process. The response is only completed upon acceptance by the server process.

• Next state field. The sequence enumeration takes the next state immediately after the action field has terminated.

• Description field. A transition can be described separately. This field is optional.

• Reference field. The reference field specifies a requirement tag, which enables requirements traceability. A transition can refer to a customer requirement or a derived requirement. This field is optional.

The sequence enumeration tables define the client-server relationships between processes: the accept field defines the server role of a process and the responses in the action field define its client role. Examples of sequence enumeration tables are given in Section 4.

3. Refinement Verification
A specification that is equivalent to a simpler specification can be replaced by the simpler one. Let process P be a complex white box specification and process Q a black box specification; P was developed from a design perspective and Q from a requirements perspective. P and Q must share the same interface, i.e. refinement is only possible if the number of ports, their types and their order are equal. FDR checks whether P is trace-failure-divergence compatible with Q, i.e. the future stimuli and responses of P and Q must be the same. If they are not trace-failure-divergence equivalent, P is not a refinement of Q. This indicates incomplete requirements or a mapping failure between the informal and the formal specifications; in that case, iterations with the customer are required to clarify and resolve the mismatch. Abstract software specifications usually represent the requirements, which are called predicates. A predicate specifies properties of a process that must hold. The refinement check in CSP is written:

Spec' ⊑ Spec
Spec’ is the predicate being asserted and Spec is tested against Spec’. Spec passes just when all its behaviours are one of Spec’. In other words, the behaviours of Spec’ include all those of Spec. Spec is a refinement of Spec’. The refinement-check considers all the process’ behaviours, namely traces, failures and divergences. In case Spec’ and Spec are both predicates, their behaviours must be equivalent, so: Spec’ Ҳ Spec and Spec ҲSpec’
should hold. This is called the equivalence relationship between the two specifications: P is equivalent to Q.
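As a toy illustration of what the refinement check means (illustrative Go only; FDR also checks failures and divergences, and explores the state space exhaustively rather than by bounded trace enumeration), the sketch below tests traces refinement between two small labelled transition systems:

```go
package main

import (
	"fmt"
	"strings"
)

// A process modelled as a finite labelled transition system:
// state -> event -> next state, with initial state 0.
type LTS map[int]map[string]int

// traces returns all traces of p up to length maxLen.
func traces(p LTS, maxLen int) map[string]bool {
	seen := map[string]bool{"": true}
	type cfg struct {
		state int
		trace []string
	}
	frontier := []cfg{{0, nil}}
	for i := 0; i < maxLen; i++ {
		var next []cfg
		for _, c := range frontier {
			for ev, s := range p[c.state] {
				t := append(append([]string{}, c.trace...), ev)
				seen[strings.Join(t, ".")] = true
				next = append(next, cfg{s, t})
			}
		}
		frontier = next
	}
	return seen
}

// refinesTraces reports whether impl traces-refines spec: every trace
// of impl is also a trace of spec (FDR additionally checks failures
// and divergences, which this sketch omits).
func refinesTraces(spec, impl LTS, maxLen int) bool {
	ok := traces(spec, maxLen)
	for t := range traces(impl, maxLen) {
		if !ok[t] {
			return false
		}
	}
	return true
}

func main() {
	spec := LTS{0: {"start": 1}, 1: {"stop": 0}}
	good := LTS{0: {"start": 1}, 1: {"stop": 0}}
	bad := LTS{0: {"start": 1}, 1: {"stop": 0, "abort": 0}}
	fmt.Println(refinesTraces(spec, good, 6)) // true
	fmt.Println(refinesTraces(spec, bad, 6))  // false: "start.abort" is not a trace of spec
}
```

Equivalence, as used above, corresponds to refinement holding in both directions.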
If P is equivalent to Q and Q is a simpler description, then Q can be used instead of P for further analysis. This simplifies and accelerates refinement verification in the hierarchy of specifications. In I-Mathic, a specification can be verified for equivalence with another specification, regardless of whether that is a black box or a white box specification; verification is possible as long as the process interfaces are compatible.

4. Example
An example of a simple system is illustrated in this section. We do not have the space to discuss the system in detail; the exact meaning of the processes, their detailed specifications and the requirements are omitted. The example illustrates the look-and-feel of the I-Mathic method and I-Mathic Studio. A complete illustration of I-Mathic is provided by an I-Mathic course at Imtech ICT. The example covers the interaction between a user and an embedded system. The user interacts with the system via a graphical user interface (GUI). The embedded system is a system controller consisting of three subsystems: a supervisory controller and two peripheral controllers; see Figure 3. The kernel controller performs supervisory control over the transport and pick & place controllers. The transport and pick & place controllers contain further subsystems, which are omitted in this paper. Each controller is a process, connected via channels; see the arrows in the figure. The focus of this example is on the embedded system and not on the GUI.
Figure 3. Process architecture of controllers (GUI, System Controller, Kernel Controller, Transport Controller, Pick & Place Controller; further subsystems elided).
The process names and process classes are given in Figure 4. The process class System describes a white box specification by a network of processes, since we peek inside System. The Kernel process class is the specification that needs to be developed.
Figure 4. Network of processes with identifiers (system : System, kernel : Kernel, trans : Transport, equip : PPEquipment).
Coincidentally, the system, trans and equip processes have the same interface and must behave similarly; hence, they have the same black box specification, which is defined by the Peripheral process class. We must prove that the process classes System, Transport and PPEquipment are equivalent to Peripheral. See Figure 5.
Figure 5. Equivalence relationships specifying refinement verifications (system : System ≡ Peripheral, trans : Transport ≡ Peripheral, equip : PPEquipment ≡ Peripheral; kernel : Kernel has no equivalence relationship).
Since Transport and PPEquipment are custom-made components, they have no white box specification; these components need to be tested using test cases. In this example, we assume that Transport and PPEquipment are equivalent to Peripheral. The Kernel specification must be developed such that System fulfils the functional requirements and System is equivalent to the Peripheral specification. Once System, Transport and PPEquipment are shown to be equivalent to Peripheral, Peripheral can be used instead of them for further refinement verification. This optimizes the refinement verification: FDR uses less state space and the model-checking performs faster. The Peripheral specification is given in I-Mathic Studio in Figure 6. The Peripheral process class is selected in the tree view; the expanded Peripheral process class shows the ports, the services it can offer and an internal event (Error).
Figure 6. Peripheral process class in I-Mathic Studio.
The sequence enumeration table on the right side shows the input ports (and services) and the internal event (spontaneous (Error)) in the accept fields for each state. The internal event can occur inside the process (like a spontaneous exception) and is therefore not passed via the process interface. The white rows are specified transitions going to a next state. The grey rows are so-called ILLEGAL transitions, which are supposed never to happen. The model-checker notifies the user if there is a possibility that the process can end in an ILLEGAL state due to an illegal service request. Thus, illegal transitions are detected during refinement verification and not at run-time. This saves time, and the method guarantees that all illegal requests are detected and solved in the early specification phase of development. For each sequence enumeration table a state transition diagram (STD) can be shown. An STD gives an overview of states, transitions and a sequential thread of control. The ILLEGAL states are not shown in an STD because this would make the STD unnecessarily complex. Figure 7 shows the STD of the Peripheral specification.
Figure 7. State transition diagram of Peripheral process class.
The System process class describes a network of processes, which can be depicted by a process diagram (PD). Figure 8 shows the network of processes and connections. The System process class is clearly a white box specification. A PD can be shown at every level in the process hierarchy. A concrete specification can be tested for equivalence by specifying an abstract specification. For example, Figure 9 shows the properties of the System process class: here, Peripheral is specified as the abstract specification, so the System process class will be verified against the Peripheral process class. The same is done for the Transport and PPEquipment process classes. The Kernel process class does not have an equivalence relationship specified, and therefore kernel will only be verified as part of the System specification.
Figure 8. Process diagram of Kernel process class.
Figure 9. System should be equivalent to Peripheral.
FDR checks the model successfully: each equivalence relationship holds and the system is deadlock-free. Unfortunately, we did not succeed in making a screenshot in time of the Linux machine on which FDR runs, so the FDR results are not discussed further.

5. Conclusions
The method allows the user to describe software specifications in terms of processes. A process can be described by a sequence enumeration table or by a network of parallel processes. This process-oriented framework allows the user to compose a complex system from simpler subsystems. The absence of objects allows the user to reason clearly about the functional behaviour of the system without being distracted by the limitations of (inherently sequential) object-oriented structures. Eventually, the object-oriented and multithreaded structures in the implementation are determined by the process architecture. The software specification refinement and verification method has been improved with CSP programming concepts. These concepts provide the user with practical software engineering constructs for describing concurrent behaviours in software specifications. I-Mathic includes sequential, parallel and choice constructs, and channels that synchronize parallel sequence enumerations (processes). These concepts are required for the compositionality and scalability of software specifications. The CSP/FDR framework allows for refinement verification between abstract (black box) and concrete (white box) software specifications. This refinement verification process
proves the completeness and correctness of the software specifications. Deadlock, livelock and race conditions are also checked. I-Mathic Studio has a graphical user interface with features that assist the user in the development of software specifications. This improves productivity, and the user does not have to be an expert in formal methods.

6. Future work
The CSP programming paradigm provides I-Mathic with a road map of new features that are useful for describing software specifications of complex and mission-critical embedded systems. Additional features, such as debugging in I-Mathic Studio (rather than in FDR), animation of execution, colouring of deadlocks and livelocks in sequence enumeration tables, and graphical modelling, are planned to be implemented in I-Mathic Studio.

References

[1] Broadfoot, G.H. (2005). "Introducing Formal Methods into Industry using Cleanroom and CSP." Dedicated Systems Magazine Q1.
[2] Hilderink, G.H. (2005). Managing Complexity of Control Systems through Concurrency. Control Laboratory, University of Twente, Enschede, The Netherlands.
[3] Hoare, C.A.R. (1985). Communicating Sequential Processes. Prentice Hall.
[4] Prowell, S.J., Trammell, C.J., et al. (1998). Cleanroom Software Engineering: Technology and Process. Addison-Wesley.
[5] Roscoe, A.W. (1998). The Theory and Practice of Concurrency. Prentice Hall.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Video Processing in occam-pi

Carl G. RITSON, Adam T. SAMPSON and Frederick R.M. BARNES
Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NF, England
{cr49, ats1, frmb}@kent.ac.uk

Abstract. The occam-π language provides many novel features for concurrent software development. This paper describes a video processing framework that explores the use of these features for multimedia applications. Processes are used to encapsulate operations on video and audio streams; mobile data types are used to transfer data between them efficiently, and mobile channels allow the process network to be dynamically reconfigured at runtime. We present demonstration applications including an interactive video player. Preliminary benchmarks show that the framework has comparable overhead to multimedia systems programmed using traditional methods.

Keywords. occam-pi, concurrency, process networks, video, video processing
Introduction

This paper describes a video processing framework written in the occam-π language. Its design derives from the primary author's experiences developing for and using a number of open-source video processing tools [1,2] and applications [3,4]. This work shows that occam-π is not only suitable for developing whole multimedia applications, but also offers advantages for the development of individual multimedia software components. A video processing framework, or more generally a multimedia framework, is an API which facilitates the interaction and transfer of data between multimedia-handling software components. Almost all video processing systems are constructed within such frameworks, with distinct software components composed into pipelines or layers. Software components with standardised interfaces can easily be modelled using object-oriented techniques, and as a consequence most existing frameworks are written in an object-oriented style. AviSynth [1] (C++), the DirectShow Filter Graph [5] (C++), Kamaelia [6] (Python) and GStreamer [7] (C/GLib [8]) are examples of this. Communication between components is implemented either by direct calls to object (component) methods [1], or by interaction with buffers [5,7] (often called pins or pads) between components. The second approach can be directly parallelised, whereas the first requires the addition of buffers between parallel components. The method-call model, without parallelism, is often preferred for interactive applications: here a control component pushes or pulls data to or from the user, and it is easier to reason about the data currently being presented. Neither of these approaches simplifies the design process, particularly in the presence of parallelism. For example, in order to create a circular ring of filters using method calls, one component must act as the initiator to avoid the system descending into infinite recursion.
It is often difficult to reason about the correctness of systems produced using such methods. The occam-π language, with semantics based on Hoare's CSP [9,10] and Milner's π-calculus [11] process algebras, offers a number of features which can be used to overcome these problems:
• The language's formal basis makes it possible to reason about the behaviour of concurrent processes and their interactions: for example, design patterns exist which guarantee freedom from deadlock [12].
• Mobile data types allow concurrent processes to safely and efficiently pass data around by reference, whilst avoiding race-hazard and aliasing errors.
• Mobile channel ends and dynamic process creation facilities allow networks of processes to be constructed, modified and destroyed at runtime.

The transputer processor [13], for which the occam language was originally developed, has previously been used for multimedia [14], and occam-like languages have been developed which allow the dynamic specification of multimedia systems [15]. However, such research predates the occam-π language and thus cannot take advantage of its new features [16,17]; occam-π's potential in this area is as yet unexplored. The framework presented in this paper uses occam channels, grouped into occam-π channel types, to provide a standardised interface between components. Mobile data provides for efficient communication within the system, and mobile channel ends allow those systems to be reconfigured dynamically at run-time. The resulting process networks are similar to the models of traditional video processing systems, but have an implementation that more closely resembles the design. Section 1 explores the development of an occam-π video player, looking at both 'push' and 'pull' modes of operation. Sections 2 and 3 examine the protocols and connectivity in detail. Issues of dynamic network reconfiguration are discussed in section 4, followed by an example application in section 5. Section 6 discusses the handling of status reporting in such dynamic networks. Initial conclusions and a discussion of future work are given in section 7.

1.
ovp – The occam-π Video Player

This section explores the development of the occam-π video player, ovp, in order to give an insight into the ideas underpinning the framework presented in the rest of the paper. Readers with a particular interest in the framework's implementation details may wish to skip ahead to section 2.

1.1. Basic Player

A basic video player needs to process and output two tracks of data (audio and video) concurrently. In order to achieve this we could use the process network shown in Figure 1. The specific decoder and output types can be inferred from the initial setup (init) messages which propagate through the network; the network after the init messages might look like Figure 2. This is in effect a form of runtime typing, and as such there is no guarantee that the given network of components will function together; section 7.2 discusses this further. Data will flow through the network as the output processes consume it (at a rate dependent on timecodes embedded in the data flow). The network will act as a pipeline, with the decoders running in parallel with the output processes. This will continue until an end message is received, which will flush and shut down the network. This gives us linear playback.

1.2. User Control

A more general video player will have user control in the form of non-linear playback controls and pausing. For this we need to modify the network; an initial solution is presented in Figure 3.

Figure 1. Process network for a simple occam-π video player.

Figure 2. Simple occam-π video player after init messages have passed through the network.

Figure 3. Process network for a seekable occam-π video player.

The "User Control" process repositions the input file in response to user seek requests. The purpose of the "Flow Control" processes is less obvious. As the network buffers data, a pause or change in track position will take time to filter through from the demultiplexer to the outputs. This is not desirable if we want the network to appear responsive to user input. The flow controls are thus used to drain the input and decoding parts of the network during a pause or seek, and purge the outputs, meaning the user does not have to wait for the pipeline to empty before new data is presented. One significant issue with this design is that it requires the temporal position of both streams to be the same after a seek or pause, otherwise there will be skew between them. This is not something we can guarantee. After a pause we will have introduced a skew proportional to the size of any output buffers (audio output devices always have buffers). For seeking, there is no guarantee that the resolution of positioning data will be the same for all tracks. The timeline resolution of video will typically be several seconds (depending on the frequency of the "key frames" upon which the decoder must synchronise), and that of audio hundreds of milliseconds (depending on packet size). Therefore, after a seek, the tracks will most likely be out of sync.
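The push pipeline described above can be sketched with goroutines and channels as a loose analogy of the occam-π network (the packet format and process names are inventions of this sketch; closing a channel stands in for the end message that flushes the network):

```go
package main

import (
	"fmt"
	"sync"
)

// A push pipeline in the style of Figure 1: the demultiplexer pushes
// packets downstream, each stage runs concurrently, and closing a
// channel stands in for the end message that shuts the network down.
func demux(packets []string, video, audio chan<- string) {
	for _, p := range packets {
		if p[0] == 'v' {
			video <- p
		} else {
			audio <- p
		}
	}
	close(video)
	close(audio)
}

func decode(in <-chan string, out chan<- string) {
	for p := range in {
		out <- "decoded:" + p // stand-in for real decoding work
	}
	close(out)
}

func main() {
	video, audio := make(chan string), make(chan string)
	vOut, aOut := make(chan string), make(chan string)
	go demux([]string{"v1", "a1", "v2"}, video, audio)
	go decode(video, vOut)
	go decode(audio, aOut)

	var wg sync.WaitGroup
	for _, c := range []chan string{vOut, aOut} {
		wg.Add(1)
		go func(out chan string) { // output process consuming its track
			defer wg.Done()
			for frame := range out {
				fmt.Println(frame)
			}
		}(c)
	}
	wg.Wait()
}
```

The decoders and outputs run in parallel, so the structure (and the draining problem a pause introduces) mirrors the pipeline described in the text.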
This problem can be resolved by making two changes:

1. When unpausing or seeking, decide a new position for all streams and distribute this to the flow controls, which then discard data before the new position.
2. Provide synchronisation facilities in the output processes.

1.3. Output Synchronisation
Figure 4. Breakdown of an output process, showing processes added to aid synchronisation.
In order to support output synchronisation, the output processes are broken down into three parts, as shown in Figure 4. The embedded output device process acts in a pass-through manner. The device manager monitors the position of the output device using the timecodes of its output stream, and delays frames appropriately using the clock process. The clock process converts the KRoC environment's microsecond timers to nanosecond timecodes. Given the KRoC environment's nanosecond communication times, reading the time via requests to separate processes should not lead to any significant inaccuracy, although it could be inefficient when used on a large scale.

The device manager starts in an unsynchronised state, and requests that the clock synchronise when it receives a timecoded message, providing the timecode as the synchronisation point. On receipt of a purge message, the device manager resets the clock and returns to an unsynchronised state. Synchronising all the outputs of the network is now a matter of synchronising their respective clocks (Figure 5).
Figure 5. The occam-π video player, push-based seekable design with synchronised outputs.
Whenever a clock receives a synchronise or reset request, it forwards this to the "Clock Sync" process. The clock sync process in turn initiates a synchronisation cycle (if one is not already running), which resets all other connected clocks. A clock which has been reset interrupts any pending alarms and refuses to accept new alarm requests until it has been resynchronised.

Once all clocks are attempting to synchronise and have presented the clock sync process with their desired timecodes, the sync process picks the earliest timecode and associates it with a point in KRoC environment time. This association is the synchronisation point, and is returned to all the clocks. All clocks thus acquire the same mapping of KRoC timer offset to timecode. This process is very much like a barrier. In practice the sync process returns a synchronisation point slightly earlier than that requested, allowing some propagation time.

A synchronisation mechanism similar to this could be extended to work across multiple distributed hosts, allowing the synchronisation of multiple distributed output devices – a topic for future research.

1.4. Pull Model

The design ideas presented so far need employ only the framework's stream protocol P.MM (see section 2). While these designs do work in practice, they are overly complex; as a side-effect of the input process driving the network, changes to the flow must be applied in two places. It makes more sense to have the process receiving user requests drive the network, and hence be able to respond directly to those requests. For this we can use the framework's request/response protocols P.MM.CTL and P.MM.SEEKABLE (see section 3).
Figure 6. Process network for the complete occam-π video player, based on a pull model.
Figure 6 shows the operating process network for the complete occam-π video player, built using a request/response model. The "Play Control" process requests data from the inputs via the decoding pipelines, and passes it to the outputs, which are synchronised as previously described. The channel between the play control and the clock sync process is used to inform the clock sync process how many processes should be synchronised (some tracks may have come to an end and will not need synchronising). The seekable decoders are simply decoder processes extended to handle the P.MM.CTL/SEEKABLE protocols using the seekable wrapper described in section 3.4. A flow path exists from each output back to the play control process via CT.USER.CMD channel bundles. This allows user commands input via the device, for example from an X11 window, to control playback.

An advantage of this design is that, since input tracks are considered separate streams and only share the common factor of the play control process, tracks need not come from the same source. This means that audio from one file could be combined with video from another file without a complex backend synchronising the input processes. A "multi-play" mode in the occam-π video player uses this feature to play the video from any number of files simultaneously, synchronised in the same way as a pair of audio and video tracks.

Another advantage of the pull model is that, by adding filters which intercept requests and distribute them over input sources, many separate input files could be arbitrarily combined into a single track (Figure 7). This idea has not yet been implemented.
Figure 7. Possible design for a Merge process, which combines two other inputs.
2. Streaming – P.MM

At the heart of the framework is a single stream protocol, P.MM (Protocol MultiMedia), which carries video and audio along with untyped "packet" data and commands, using a "push" data flow model. The majority of the data elements are declared mobile [16]. If mobile data types were not used then data would need to be copied between communicating processes – highly inefficient for video, where the data rate will typically exceed 20MB/sec (for standard-definition broadcast video). Using mobile data types, the pipeline of processes can be extended with no significant decrease in performance.

PROTOCOL P.MM
  CASE
    init;   MOBILE TRACK.INFO; MOBILE []BYTE
    packet; TID; MOBILE PKT.HDR; MOBILE []R.DESC; MOBILE []BYTE
    video;  TID; MOBILE VF.DESC; MOBILE []BYTE
    audio;  TID; MOBILE AF.DESC; MOBILE []BYTE
    flush
    purge
    skip
    end
:
init signals the start of a stream.

packet carries untyped media frames, typically compressed video or audio.

video carries a single video frame.

audio carries an audio frame of variable length.

flush instructs the receiver to output all ready buffers, then forward the flush command. This is necessary for the ideas presented in sections 3.4 and 1.

purge instructs the receiver to clear its internal state without generating any output, and to prepare for new data. The purge command is forwarded when the receiver is ready for new input. Like flush, this is used in sections 3.4 and 1.

skip instructs the receiver to do nothing. Unlike flush and purge it is not forwarded. This command is used to build zero-place buffers (see section 3.5).
end indicates the end of the stream; no data will follow. This provides a form of graceful termination [18]. On receipt, the receiver should terminate after outputting any ready buffers (as for a flush) and propagating the end message, unless it is restartable (such as pluggable.stream.input.end in section 5).

2.1. P.MM Usage Contract

There is as yet no language syntax for the specification of communication contracts in occam-π. For readability we have chosen to use a regular expression syntax for the contracts presented here. Hoare's CSP, or a derivative such as that used in Singularity [19], could also have been used. The expected sequence of messages on a channel of P.MM is as follows:

  skip* (init (packet|video|audio|flush|purge|skip)* )? end
In summary, this means that the only certain event is the end of the stream, which can happen without prior initialisation. The skip command is permitted before initialisation to aid in the creation of zero-place buffers (see section 3.5). Additionally, it is assumed there will not be a one-to-one mapping between input and output for processes implementing P.MM – a process may buffer as much or as little data as needed while maintaining FIFO ordering. The effect of this is that, after sending an init, purge or end message to a process, it must not be assumed that the next output message will be of that type. Video encoders and decoders in particular require this form of internal buffering.

2.2. Timing – TID

Each elementary type (packet, video, audio) is associated with a TID data structure (Temporal IDentifier). A TID structure describes the position of a packet or frame within the timeline of the stream. This is done via the timecode field, which is an offset in nanoseconds from a fixed point, typically the beginning of the stream. Nanoseconds are employed to allow the framework to manipulate data from Matroska [20] files without loss of timing resolution; however, microseconds would be sufficient for most present media formats. A duration in milliseconds is also stored, although this has limited uses.

DATA TYPE TID
  PACKED RECORD
    INT64 timecode:
    INT duration:
:
Traditionally, multimedia systems identify frames by their number in sequence from the beginning of the stream, or using SMPTE timecodes [21,22] which combine time and frame number offsets. This means that a stream is expected to have a fixed number of frames per unit time. In contrast, the framework presented here identifies frames purely by time. There are three significant reasons for this:

1. When combining different streams together, it is more efficient to have a single common timeline to work with, rather than many sets of sample number and rate pairs which must be normalised. Although at first this normalisation seems trivial, without timecode-based identification there is no way to synchronise and temporally manipulate streams without first knowing their respective sample rates – limiting the ways in which a system can dynamically adapt to new streams.

2. Any fixed sample rate system can be represented in a purely timecode-based system, assuming the timecodes have sufficient resolution and range.

3. A timecode-based system can represent streams with variable sample rates (discussed in the following subsection).
2.3. Aside on Variable-Frame-Rate Video
Although variable sample rate audio is uncommon, mixed frame rate video content is already in widespread circulation. In the production of NTSC television content and DVDs, it is common to use "pulldown" techniques to combine source material of different frame rates (typically 23.976fps for content that originated on film, and 29.970fps for content that originated on video). These mixed content streams can be represented more accurately in the digital domain by using a higher frame rate which is a common multiple of all the source rates (typically 119.880fps), and introducing "drop frames" where no actual frame data exists. This technique is, however, only applicable where there is a convenient common multiple of the frame rates.

An alternative is to convert the frame rates of the source materials; however, this conversion often introduces visible artefacts. Changing the frame rate of video can trivially be done by dropping or duplicating existing frames, but this causes jerky motion; to avoid it, it is necessary to synthesise new frames by estimating the motion of objects in the video images. Techniques to do this exist, but they are computationally very expensive and do not work well on "noisy" video.

A better option is to do away with the need for fixed frame rates, and simply tag each frame with its corresponding time – this is variable frame rate (VFR) video. VFR avoids the need for resampling entirely, allows the entire information content of the original video to be preserved, and frees the time and rate of change from quantisation.

Modern compressed video formats are based around coding change – there is no need to code a new frame if nothing has changed. With fixed rates this is typically dealt with using drop frames, so a static image becomes a constant stream of "no change" messages. With VFR, there would be no output at all, offering potential bandwidth and disk space savings.
Video scenes requiring smooth motion can have high rates of change, and other scenes lower rates. As the stream is not quantised, the actual changes can be placed at the most visually pleasing points in time, allowing acceptably smooth motion at lower data rates. While existing output devices operate at a fixed frame rate, modern LCD displays are capable of running at rates far in excess of the captured rate of change in progressive video. This gives good scope for coding motion in a more visually pleasing way with fewer frames. It seems likely that future display devices will allow the display to be updated on demand; they will then be able to display VFR content without quantisation.

We feel that VFR has clear advantages over conventional constant-frame-rate video. Sample rates themselves result from the need to interface with analogue electronics, and as the world moves toward purely digital production and delivery of media content (high-definition television, TV over the Internet and digital end-to-end mastering), it is our expectation that variable frame rate material will become the standard. (It is worth noting that Internet streaming protocols are already beginning to support VFR content [23,24].)
2.4. Flexibility – TRACK.INFO

The TRACK.INFO data structure is heavily based on the "Track" descriptor in the Matroska [20] media container format. The Matroska format is designed to be able to hold an arbitrary number of media tracks of any type, and thus provided much of the inspiration for our framework's track-handling capabilities. Earlier versions of TRACK.INFO were almost exact mirrors of the equivalent Matroska structure; however, the design has since been refined. The TRACK element of the name of this structure is itself a Matroska legacy; STREAM would be equally suitable.

3. Interactivity – P.MM.CTL/SEEKABLE

The following protocols act as a request/response pair and extend the commands of the stream P.MM protocol to provide interactive facilities through a "pull" model. This pull model sacrifices full parallel processing; only a single request is outstanding at any given time. It is intended for interactive applications where filling the pipeline with data is undesirable (due to the increase in end-to-end latency that results). Pre-roll buffer processes can be added to keep the pipeline full and restore parallel processing, if so desired.

3.1. Feedback – P.MM.CTL

P.MM.CTL is the request protocol. The process "pulling" data sends a single request and waits for a response.

PROTOCOL P.MM.CTL
  CASE
    init
    next
    seek; INT; INT64
    purge
    end
:
init requests the TRACK.INFO structure and setup data for the track.

next requests the logically next packet, video or audio frame in the stream.

seek requests that the stream be repositioned to a new timecode held in the INT64. The INT value is a constant describing how to handle cases where the exact timecode requested cannot be reached, which is almost inevitable. If set to SEEK.CLOSEST then the closest match will be picked. SEEK.BACKWARD requests the closest point not after the specified timecode, and SEEK.FORWARD the closest point not before it. In a typical video player, SEEK.BACKWARD will be used for rewind, and SEEK.FORWARD for fast-forward, in order to give the behaviour that a user would expect.

purge requests that the process clear all internal buffers. This request does not generate a response and is simply propagated to the preceding process.

end requests that the process terminate, and that it request the same of its preceding processes. Like purge, this does not generate a response.
3.2. Simplified P.MM – P.MM.SEEKABLE

P.MM.SEEKABLE is a simplified P.MM which carries responses. The data messages packet, video and audio have the same meaning as in P.MM.

PROTOCOL P.MM.SEEKABLE
  CASE
    init;   MOBILE TRACK.INFO; MOBILE []BYTE
    packet; TID; MOBILE PKT.HDR; MOBILE []R.DESC; MOBILE []BYTE
    video;  TID; MOBILE VF.DESC; MOBILE []BYTE
    audio;  TID; MOBILE AF.DESC; MOBILE []BYTE
    seeked; INT64
    end
:
init no longer represents the start of stream; it simply describes the properties of the track, and is the response to an init request.

seeked is sent in response to a seek request, and carries the timecode of the new stream position.

end means the end of the stream has been reached, but does not indicate that the component has terminated.

3.3. P.MM.CTL/SEEKABLE Usage Contract

The expected sequence of requests on a P.MM.CTL channel is as follows:

  (init|next|seek|purge)* end
With the following request-response pattern:

  init  => (init | end)
  next  => (packet | video | audio | end)
  seek  => (seeked | end)
  purge => ()
  end   => ()
3.4. Encapsulation

The P.MM and P.MM.CTL/SEEKABLE protocols were carefully designed to permit the construction of a wrapper around P.MM processes which presents a P.MM.CTL/SEEKABLE interface: seekable.wrapper (Figure 8).
Figure 8. Process network for the seekable.wrapper.
1. At startup the wrapper requests an init message and other data from the preceding process, and feeds it to the wrapped process until an init message is produced. This init message is stored and used to respond to init requests from the successor process.
2. On a next request, the wrapper makes next requests to the preceding process, feeding data to the wrapped process until output is produced. The output is forwarded to the successor. If an end response is encountered from the preceding process then the wrapped process is sent a flush message, which causes it to output any available data. Should the wrapper receive a flush message from the wrapped process, it returns end to the successor.

3. On a seek request, the wrapped process is sent a purge. The wrapper waits for purge to be emitted by the wrapped process, discarding any intervening output. At the same time, the seek request is forwarded to the preceding process, and its response is returned to the successor when the purge of the wrapped process is complete. This allows the successor to assume the process is ready for a new command immediately following a response. It should, however, be noted that the performance impact of this design has not been explored.

4. A purge request is handled much like seek, without any response to the successor.

5. An end request is forwarded to the preceding process and sent to the wrapped process. Once the wrapped process outputs end, the wrapper terminates. This is the only point at which the wrapped process will receive an end message.

3.5. The zero.place.buffer

The seekable wrapper, along with many other processes in the framework, makes use of the zero.place.buffer process. It is a variant of the standard occam "requester" or "prompter" design pattern, using the skip message of the P.MM protocol to poll a process's ability to accept input. If a process accepts a skip message, it is "immediately" ready to accept any other command. Hence, the zero.place.buffer uses an explicit protocol command to interrogate a process, rather than simply reporting when the process has accepted an already buffered message.
A traditional "requester" will report when its (typically one-place) buffer is ready to be filled, whereas the zero.place.buffer reports when the process to which it is connected is ready. The zero.place.buffer is named for this lack of buffering.

4. Dynamism and Reconnectivity – CT.IO.CTL

While a multimedia system may be statically configured at compile time, it is often more useful to be able to dynamically reconnect its components as it runs. The following protocols provide a means to interrogate a component process about its connectivity, and to "plug" and "unplug" channel ends to create connections between processes at runtime. To facilitate this, a P.MM channel is placed in a CT.MM mobile record, and a pair of P.MM.CTL and P.MM.SEEKABLE channels are placed in a CT.MM.SEEKABLE channel type [17]. This method of bundling channel ends is a result of the design of the occam-π type system.

This dynamism allows the construction of very flexible file input processes. Until a file has been opened and its header parsed, it is not known what tracks it provides or their details; with these features, the tracks to be used can be selected at runtime, and an appropriate process network automatically constructed. While it would be possible to provide file input processes with fixed sets of channel parameters which are satisfied at runtime (these existed in earlier versions of the framework), it is more convenient to implement these on top of this generic architecture.

Processes implementing this interface are expected to operate in two modes, "started" and "stopped". A stopped process can have its connectivity interrogated and changed, whereas a started process can only be queried about its mode or stopped. It is expected that processes initially start in the stopped mode, as their connectivity is undefined.
4.1. P.IO.CTL.RQ

P.IO.CTL.RQ is the request component of the protocol.

PROTOCOL P.IO.CTL.RQ
  CASE
    inputs
    outputs
    start
    stop
    status
    end
    plug.mm.i;  INT64; CT.MM?
    plug.mm.o;  INT64; CT.MM!
    plug.mms.i; INT64; CT.MM.SEEKABLE!
    plug.mms.o; INT64; CT.MM.SEEKABLE?
    unplug.i;   INT64
    unplug.o;   INT64
:
inputs and outputs are used, respectively, to request details of the inputs and outputs of the process.

start and stop are used to change the mode of the process.

status queries the present mode of the process.

end closes down the IO.CTL interface; the process will terminate when all of its connected streams terminate.

plug.* and unplug.* messages are used to plug and unplug channel ends, with the INT64 specifying the track number.

4.2. P.IO.CTL.RE

P.IO.CTL.RE is the response component.

PROTOCOL P.IO.CTL.RE
  CASE
    inputs;  MOBILE []INT; MOBILE []BOOL; MOBILE []TRACK.INFO
    outputs; MOBILE []INT; MOBILE []BOOL; MOBILE []TRACK.INFO
    plugged
    started
    stopped
    error; INT
    unplugged.mm.i;  INT; CT.MM?
    unplugged.mm.o;  INT; CT.MM!
    unplugged.mms.i; INT; CT.MM.SEEKABLE!
    unplugged.mms.o; INT; CT.MM.SEEKABLE?
:
inputs and outputs return information describing the input and output connectivity of the process.

[]INT is an array of flags detailing the types of connections (stream or seekable), and whether a given connection must be plugged before the process can be started. For a process which filters data rather than originating it, the inputs and outputs will need to be plugged before the process can be started.

[]BOOL is an array of flags indicating whether a connection is plugged or unplugged. This is separate from the other flags as it changes during operation, whereas the others typically do not.
[]TRACK.INFO is an array of the details of each connection. For inputs this will typically be a partial specification; an exact specification of which inputs the process will handle is not usually known in advance, as most processes support various data formats. The framework currently supports only a simple form of partial specification: unspecified values are given as zero.

plugged indicates that a plug request was successful.

started and stopped indicate that the process has either started or stopped.

error indicates an error occurred during a start, stop, unplug or query of inputs or outputs. The INT value is a constant representing the reason for the error.

unplugged.* messages are the response to a successful unplug request, or an unsuccessful plug request. The latter is required to prevent loss of the channel end which could not be plugged, in which case the INT value indicates why the plug failed.

It is anticipated that a process should check that inputs and outputs being plugged are compatible. This is not done in the framework at present, as this verification, like the partial specification of track types, is beyond the scope of this research. Ideally the specifications of outputs should be updated as inputs are plugged; however, this would require sending and receiving on the inputs, which is forbidden while the process is stopped and would likely block, as the other end may also be stopped. This might potentially be solved using a two-phase system of input plugging followed by output plugging.

There is also a problem with the stopped and started modes of operation in that they introduce the possibility of deadlock. If we stop a process A with which another process B is attempting to communicate, then attempt to stop B, we are likely to deadlock, as B may be blocked trying to communicate with A. This problem can be avoided by stopping processes in the same direction as the flow of data for P.MM, or the flow of requests for P.MM.CTL.
It is, however, undesirable to have a system in which a sequence such as this must be adhered to, as it degrades the parallelism and is prone to implementation error. One possible solution to the ordering and type checking problems would be to have a negotiation phase between the processes being connected and disconnected when connections are plugged and unplugged. This would allow both ends of a connection to know the state of the other end, and thus not initiate communication while it was disconnected. Type negotiation would occur during the connection phase. This model would, however, add a significant degree of complexity which we would rather avoid. Further work is required to establish a simpler and safer model for reconnectivity than the one presented here, and to establish whether wrappers like the seekable wrapper can be created for it.

5. dvplug – Digital Video Hot-plugging

The dvplug application (Figure 9) is a demonstration of digital video input, an encoding process, and the reconnectivity components of the framework. It responds both to user input and external events, and demonstrates a possible real-world application for the reconnectivity within the framework.

In the initial network, a clocked test.card source is wired through the control process (record.ctl) to the output. A dv1394.plug.detector process watches for new devices being connected to the host's IEEE 1394 [25] bus, and reports their presence to the player.spawner. On connection of a DV [26] device (such as a video camera), the player.spawner forks a set of processes to take input from the new device, decode it and remove the audio stream. Using the CT.IO.CTL protocol, it stops the pluggable.stream.input.end, disconnects
Figure 9. Process network for the dvplug application.
the test.card source from it and plugs in the new device's network. It then restarts the pluggable.stream.input.end and stores the test.card source channel end for later use.

While the DV input network is connected, the player.spawner ignores any messages about new devices. Using a status.tap (see section 6), it detects when the stream from the DV input network terminates (pluggable.stream.input.end does not propagate end messages). When such termination occurs, the player.spawner disconnects the channel end for the terminated DV input network and reconnects the test.card source. Then, after restarting the network, it returns to monitoring events from the dv1394.plug.detector.

In parallel to this, the record.ctl process responds to record commands from the user, and transfers data from the pluggable.stream.input.end to the output. On receipt of a record command, it forks a recording network, and starts copying any received messages to it in addition to the regular output. When the user requests the end of recording, the recording network is sent an end message, and the channel end to it is discarded.

6. Status Reporting Backend

Every distinct component developed for the framework takes a SHARED CT.STATUS! channel end as a parameter. This provides a means for processes to output debugging information, errors and other events in a safe and uniform manner. The channel end is manipulated using a set of "status."-prefixed processes.

The status backend is implemented as an automatically growing N-way tree. Node creation requests are passed to the root, which forks off new status.node processes before passing connections to them back up the tree, where they are "wired in". As messages pass down the tree to the root, they pick up the tags of all the nodes they pass through. By feeding the output of the root to a terminal, it is easy to monitor the execution of the application network (particularly if status.debug calls are well placed).
An example network instantiation is shown in Figure 10. Another method for implementing a similar backend would be to use a flat structure, with a root process which grows and shrinks an array of channel ends as nodes are created and destroyed. However, such an approach would be inefficient as a result of resizing and ALTing over the root array. It would also not be able to tag messages in a tree fashion as the presented implementation does, and would complicate process-to-process monitoring.

Besides user monitoring of the application, the status network can be used by processes within the network to monitor each other. This is done by inserting status.tap processes, which copy, to an extra channel, a given subset of the messages passing through them.
Figure 10. Example instantiation of the status backend.
This mechanism is used in the dvplug application (see section 5) to detect termination of DV input networks, using an EOS event.

At present the tags within the backend are the strings passed to status.startup; this could lead to name collisions in large networks. However, as status.node forking is done at the root, there should be no problem allocating each node a unique number as it is forked. This could be used in place of, or in addition to, the textual name – an area for improvement in future versions of the framework.

7. Conclusions and Future Work

7.1. Benchmarks

Preliminary profiling of the occam-π Video Player (section 1) using a kernel-based profiler [27] suggests that the framework presents no significant overhead. Framework code, "ovp" in Table 1, accounts for only 5% of the execution time of the occam-π Video Player. The remainder is spent in external libraries, decoding ("libavcodec") and copying data ("libc").

  Component               CPU samples   % CPU time
  libavcodec.so.51.5.0         167526      77.4733
  libc-2.3.5.so                 35507      16.4204
  ovp                           11264       5.2091
  libpthread-0.10.so             1461       0.6756
  libX11.so.6.2                   160       0.0740
  ld-2.3.5.so                     153       0.0708
  libXv.so.1.0                    139       0.0643
  libXext.so.6.4                   26       0.0120
  libfaad.so.0.0.0                  1      4.6e-04

Table 1. Profiling data for the occam-π Video Player (ovp).
Table 2 shows the profiling data obtained from MPlayer [3,28], a popular open-source video player. As the results show, the number of CPU cycles used by the framework is directly comparable to that used by MPlayer’s core.
  Component               CPU samples   % CPU time
  libavcodec.so.51.5.0         156290      91.0818
  mplayer                       11382       6.6331
  libc-2.3.5.so                  2182       1.2716
  ld-2.3.5.so                    1044       0.6084
  libpthread-0.10.so              379       0.2209
  libX11.so.6.2                   203       0.1183
  libXv.so.1.0                     60       0.0350
  libXext.so.6.4                   47       0.0274
  libXcursor.so.1.0.2               3       0.0017
  libdl-2.3.5.so                    1      5.8e-04
  libICE.so.6.3                     1      5.8e-04
  libfreetype.so.6.3.7              1      5.8e-04

Table 2. Profiling data for MPlayer.
The two differ in that MPlayer is able to decode compressed audio and video directly into its output buffers, whereas ovp must perform a single copy operation at output time, which accounts for 15% of its execution time ("libc" in Table 1). This is a limitation of the current implementation, and could be solved by providing a mechanism for buffer recycling.

7.2. Future Work

As the value of any particular framework lies in the functionality it provides, one clear expansion of this work would be to develop more software components. The present framework lacks any significant filtering components, so adding simple video operations (crop, scale, merge, split) would be a logical next step. Following this, it would be interesting to explore writing spatial and temporal noise reduction filters in occam-π, as these are common operations in broadcast systems and could take advantage of internal parallelism. For video surveillance applications, it would be useful to provide processes for monitoring sets of video streams and reporting changes. Processes of this kind could also be used in the control of robots and other embedded devices.

Extending the input and output capabilities of the framework is also an interesting area; in particular, replacing the C components of the avi.input and mkv.input processes with occam-π. These components were largely written in C, as that language has better facilities for handling complex data structures than present-day occam-π. However, an attempt at an occam-π implementation could inform the future development of the language's data structure facilities.

One area left largely unsolved in the present framework design is safe reconnectivity. Reconnections need to be made both type-safe (in a high-level sense, ensuring data formats are the same) and deadlock-free. Although this safety is not guaranteed for static networks, the issue is mitigated by runtime checks.
These are insufficient when a component’s input can potentially change format mid-stream (as a result of a reconnection). Deadlock is possible in the current reconnectivity model if processes are not stopped in the correct order (in the direction of data flow). This is not an issue in small networks where a single process is controlling the reconnection, but in a larger system many different processes may be modifying the network in parallel, in which case the ordering rules cannot easily be enforced. In order to maintain the simplicity of the framework’s process interfaces, we feel that reconnectivity will best be implemented using wrapper processes, concealing the inherent complexity from component developers.
C.G. Ritson et al. / Video Processing in occam-pi
It would also be desirable to provide a scripting language for the specification of networks of filters (as AviSynth [1] does) and a tool for editing such networks in real time (like GraphEdit [29]), both of which might build on ideas presented in [30].

A final area for future development is error handling. At present the framework attempts to conceal and suppress errors: an error will typically cause termination of a filter, or its conversion to a null filter which does nothing. This is an unsatisfactory state of affairs for broadcast applications; the framework should instead be able to recover from errors by allowing failed components to be replaced on the fly. One possibility is an exception-like model, in which the failed process emits an exception message providing its present state (channel ends, track information, etc.). A control process would handle the exception, make any changes required, then invoke a replacement process using the provided state information.

7.3. Final Remarks

We have explored, for the first time, the use of the occam-π language’s unique concurrency features for building a video processing framework and applications. Multimedia systems must present a concurrent interface to the real world, and hence require some degree of internal concurrency. In systems developed using traditional methods, concurrency must often be simulated by explicit scheduling; in occam-π, concurrency can be expressed directly using language constructs. occam-π’s language features allow us to structure highly concurrent programs such that they can be easily understood. Such programs can take full advantage of the concurrency features of modern processing hardware, and their safety properties can be reasoned about using formal methods.

We have demonstrated that our framework has comparable efficiency to multimedia systems developed in C using traditional, non-concurrent methods.
This is the result of using occam-π’s mobile data types, which allow safe, zero-copy data exchange between concurrent processes. We have described a process-oriented approach to the construction of reconfigurable components using occam-π’s mobile channel types. These components may be dynamically reconnected at runtime with minimal disruption to the rest of the system.

To conclude, we have shown that occam-π and process-oriented techniques offer several advantages to developers of multimedia systems when used in preference to traditional languages and design methods.

References

[1] AviSynth. Open source, scripted, video post-production tool. URL: http://www.avisynth.org/.
[2] VirtualDub. Open source video capture/processing utility for 32-bit Windows platforms. URL: http://www.virtualdub.org/.
[3] MPlayer. Open source multi-platform movie player. URL: http://www.mplayerhq.hu/.
[4] Video LAN Client. Open source cross-platform media player. URL: http://www.videolan.org/.
[5] Microsoft. DirectShow. URL: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directshow/htm/directshow.asp.
[6] BBC. Kamaelia. Open source concurrent Python based toolkit with multimedia components. URL: http://kamaelia.sourceforge.net/Home.
[7] GStreamer. Open source multi-media framework. URL: http://www.gstreamer.net/.
[8] GLib. C utility library and object system. URL: http://www.gtk.org/.
[9] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, London, 1985. ISBN: 0-13-153271-5.
[10] A.W. Roscoe. The Theory and Practice of Concurrency. Prentice Hall, 1997. ISBN: 0-13-674409-5.
[11] R. Milner. Communicating and Mobile Systems: the Pi-Calculus. Cambridge University Press, 1999. ISBN-10: 0521658691, ISBN-13: 9780521658690.
[12] P.H. Welch, G.R.R. Justo, and C.J. Willcock. Higher-Level Paradigms for Deadlock-Free High-Performance Systems. In R. Grebe, J. Hektor, S.C. Hilton, M.R. Jane, and P.H. Welch, editors, Transputer Applications and Systems ’93, Proceedings of the 1993 World Transputer Congress, volume 2, pages 981–1004, Aachen, Germany, September 1993. IOS Press, Netherlands. ISBN 90-5199-140-1. See also: http://www.cs.kent.ac.uk/pubs/1993/279.
[13] Inmos Limited. The Transputer Databook (2nd Edition). Inmos Limited, 1989. INMOS document number: 72 TRN 203 01.
[14] Tony King. Pandora: An Experiment in Distributed Multimedia. Comp. Graph. Forum, 11(3):23–34, 1992.
[15] David May and Henk L. Muller. Using Channels for Multimedia Communication. Technical report, University of Bristol, Department of Computer Science, February 1998.
[16] F.R.M. Barnes and P.H. Welch. Mobile Data, Dynamic Allocation and Zero Aliasing: an occam Experiment. In Alan Chalmers, Majid Mirmehdi, and Henk Muller, editors, Communicating Process Architectures 2001, volume 59 of Concurrent Systems Engineering, pages 243–264, Amsterdam, The Netherlands, September 2001. WoTUG, IOS Press. ISBN: 1-58603-202-X.
[17] F.R.M. Barnes and P.H. Welch. Prioritised Dynamic Communicating Processes: Part I. In James Pascoe, Peter Welch, Roger Loader, and Vaidy Sunderam, editors, Communicating Process Architectures 2002, WoTUG-25, Concurrent Systems Engineering, pages 331–361, IOS Press, Amsterdam, The Netherlands, September 2002. ISBN: 1-58603-268-2.
[18] P.H. Welch. Graceful Termination – Graceful Resetting. In Applying Transputer-Based Parallel Machines, Proceedings of OUG 10, pages 310–317, Enschede, Netherlands, April 1989. Occam User Group, IOS Press, Netherlands. ISBN 90-5199-007-3.
[19] M. Fahndrich, M. Aiken, C. Hawblitzel, O. Hodson, G. Hunt, J.R. Larus, and S. Levi. Language support for Fast and Reliable Message-based Communication in Singularity OS. In Proceedings of EuroSys 2006, Leuven, Belgium, April 2006. URL: http://www.cs.kuleuven.ac.be/conference/EuroSys2006/papers/p177-fahndrich.pdf.
[20] Matroska. Extensible open standard audio and video container format. URL: http://www.matroska.org/.
[21] American National Standards Institute. ANSI/SMPTE 12M-1986, “Television – Time and Control Code Video and Audio Tape for 525-Line/60-Field Systems”, January 1986.
[22] Society of Motion Picture and Television Engineers. SMPTE 12M-1999, “Television, Audio and Film Time and Control Code”, 1999.
[23] David Singer. Associating SMPTE time-codes with RTP streams, January 2006. A mechanism for associating SMPTE time-codes with media streams, in a way that is independent of the RTP payload format of the media stream itself. URL: http://www3.ietf.org/internet-drafts/draft-ietf-avt-smpte-rtp-01.txt.
[24] Colin Perkins and Stephan Wenger. RTP Timestamp Frequency for Variable Rate Audio Codecs, October 2004. IETF memo discussing the problems of audio codecs with variable external sampling rates. URL: http://www3.ietf.org/proceedings/05mar/IDs/draft-ietf-avt-variable-rate-audio-00.txt.
[25] IEEE. Std. 1394-1995 – IEEE standard for a high performance serial bus, August 1996. ISBN: 1-55937-583-3.
[26] Society of Motion Picture and Television Engineers. SMPTE 314M-1999, “Television – Data Structure for DV-Based Audio, Data and Compressed Video – 25 and 50Mb/s”, 1999.
[27] OProfile. A system-wide profiler for Linux systems. URL: http://oprofile.sourceforge.net/.
[28] TUX Magazine. TUX 2005 Readers’ Choice Award Winners. URL: http://www.tuxmagazine.com/node/1000151.
[29] Microsoft. GraphEdit. DirectX SDK tool for creating and debugging filter graphs. URL: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directshow/htm/simulatinggraphbuildingwithgraphedit.asp.
[30] D.J. Beckett and P.H. Welch. A Strict occam Design Tool. In C.R. Jesshope and A. Shafarenko, editors, Proceedings of UK Parallel ’96, pages 53–69. Springer-Verlag, July 1996. ISBN: 3-540-76068-7.
A. Diagram Notation
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
No Blocking on Yesterday’s Embedded CSP Implementation
(The Rubber Band of Getting it Right and Simple)

Øyvind TEIG
Autronica Fire and Security (A UTC Fire and Security company)
Trondheim, Norway
http://home.no.net/oyvteig

Abstract. This article is a follow-up to the paper “From message queue to ready queue”, presented at the ERCIM Workshop last year. A (mostly) synchronous layer had been implemented on top of an existing asynchronous run-time system. After that workshop, we discovered that the initial implementation contained two errors, both concerning malignant process rescheduling: one associated with timers, the other with “reuse” of the input side of a channel. Also, the set of process/dataflow patterns was not sufficient. To keep complexity low, we have made two new patterns to reflect better the semantic needs inherent in the application. Our assumption of correctness is again based both on heuristics and “white-board reasoning”. However, both the previous and this paper were produced before any first shipment of the product, and well within full-scale testing. Our solutions and way of attacking the problems have been in an industrial tradition.

Keywords. Case study, embedded, channel, run-time system.
Introduction

The purpose of this paper is to inspire designers of embedded systems. Our target platform is a microcontroller with 64 KB volatile and 256 KB non-volatile memory, and we are programming in ANSI C. We have a specified behaviour even more complex than that handled (just) by the previous industrial architecture. We have a stated need not to design against possible overflow of message buffers. We have a need to make the process state machines handle only their own state. So, we did … and we learned. Now, we hope that displaying and catering for the problems we encountered will increase confidence in our solution.

As the final stages of development and testing have been reached, we see that the “CSP way of thinking” has enabled us to implement a complex multithreaded system with easily described behaviour. System and stress testing (our heuristics) have indeed been encouraging for the scheduled release of the product, in which this (multithreaded) unit is a crucial part.

The sub-title, and real contents, of our earlier ERCIM Workshop paper [1] was: “Case study of a small, dependable synchronous blocking channels API”. Here is its abstract (modulo a couple of linguistic fixes):

This case study shows CSP style synchronous interprocess communication on top of a run-time system supporting asynchronous messaging, in an embedded system. Unidirectional, blocking channels are supplied. Benefits are no run-time system message buffer overflow and “access control” from inside a network of client processes in need of service. A pattern to avoid deadlocks is provided with an
added asynchronous data-free channel. Even if still present here, the message buffer is obsoleted, and only a ready queue could be asked for. An architecture built this way may be formally verified with the CSP process algebra.

A second sub-title was “Ship & forget rather than send & forget”, suggesting that a system built on this technology should be easier to get right the first time. This paper shows that for this to happen in the product, the process schedulings have to be 100% correct. The bottom layer of the channel run-time system must indeed be correct. Even if little is repeated from [1], we will sum up its basic points:

• The bottom layer consisted of a non-preemptive SDL¹ run-time system. Here, messages were sent in equal N-sized chunks (N is the maximum size of each buffer position) – as many as the sender needed to send within the same scheduling. Sending was non-blocking and could potentially cause malignant buffer overflow. This was the first objection to that architecture. The second was that, whatever its state, a receiver was not able to “protect” itself from messages. Timers were handled with a linked list of future times, and any timed-out timer would enter the message queue proper. A “process” could send messages to any other process, including itself. A “driver” could only send messages to processes, and receive no messages. The Scheduler scheduled processes until the message queue was empty, and then always (for “priority”) took all drivers in turn before looping around for (lower “priority”) messages from processes.

• On top of this we built the (CSP/occam-like) synchronous channel layer, with channel OUT, IN and ALT, and with timeouts on the two latter. An asynchronous sender-side data-free channel type was also introduced, needed to start a message sequence so that the initiator would in due time be allowed to send a rich message in the same direction as the asynchronous signal. It could be part of an ALT on the receiver side. This would disallow deadlocks when two processes wanted to send messages to each other spontaneously. This way, in the light of the asynchronous signal channel, we thought we had avoided overflow buffer processes for this architecture. This paper shows where we did not.

• With synchronous sending of data, the “CHAN_CSP” layer took care of the intrinsic “memcpy” when both parties were ready. Data messages were no longer sent in the message queue; only ready or scheduling “messages” were. The queue could now not overflow. For any one process, only one ready message at a time would be in the queue. A timer event could be in the queue if it arrived before the channel event; if not, it had to be 100% filtered out to avoid a malignant scheduling. This paper is also about what happened when this filtering was not working correctly.
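The scheduling policy in the first bullet – drain the message queue, then give every driver one turn before looping around – can be sketched as follows. This is a hypothetical reconstruction for illustration only; all names are invented and none of this is the Autronica code:

```c
#include <string.h>

#define NUM_DRIVERS 2

/* Records what ran in one scheduler pass: 'P' for a process
   message, 'D' for a driver poll. */
static char trace[32];

static void run_process(void) { strcat(trace, "P"); }
static void run_driver(void)  { strcat(trace, "D"); }

/* One pass of the policy: schedule processes until the message
   queue is empty, then take all drivers in turn ("priority")
   before looping around for more process messages. */
static const char *scheduler_pass(int pending_msgs)
{
    trace[0] = '\0';
    while (pending_msgs > 0) {             /* drain message queue */
        run_process();
        pending_msgs--;
    }
    for (int i = 0; i < NUM_DRIVERS; i++)  /* then all drivers */
        run_driver();
    return trace;
}
```

With three pending messages, one pass yields the trace "PPPDD": processes first, then every driver once.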
Our main source of inspiration was SPoC [2], whose source code and C coding style we knew from previous usage of occam and SPoC. However, even with occam experience on the transputer, we had no knowledge of its architecture at this level; occam keeps these lower layers hidden from us in both cases. Because of time-to-target requirements, in the industrial tradition, and since we were to build on top of an existing asynchronous system (which neither SPoC nor the transputer did), we figured that some new implementation thinking was needed. Looking back at that assumption, we now see that it would have been easier to get it right sooner using earlier known solutions.
¹ An Autronica Fire and Security developed “SDL type” run-time system, written in C. SDL stands for Specification and Description Language and is ITU-T Recommendation Z.100.
1. Run-time System Malignant Schedulings Removed

1.1 Enough is Zero Extra Timeouts

In our system we have two types of timers:

• The high-level timers are function calls that a process makes in order to initialise, increment and check for timeouts, or to stop – all based on a low-level single 32-bit 1 ms system tick. Detecting that half an hour has passed is then up to the process, by exercising the function calls directly. All functions on these timers are “atomic”, since they operate on a read-only copy of the global clock taken immediately after rescheduling in each process.

• Additionally, timeouts on channels (always as part of an ALT) or when no channel is involved (delays) are implemented. This is what we thought worked, but actually it did not.
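A minimal sketch of such a high-level timer, assuming a 32-bit 1 ms tick and a read-only snapshot of the clock (“now”) taken after rescheduling. The type and function names are invented, not the Autronica API; unsigned subtraction keeps the comparison safe across tick wrap-around:

```c
#include <stdint.h>
#include <stdbool.h>

/* A high-level timer driven by a 32-bit 1 ms system tick.  The
   process passes in its own snapshot of the clock, so every
   check is "atomic" with respect to that snapshot. */
typedef struct {
    uint32_t start;    /* tick value when the timer was started */
    uint32_t span_ms;  /* how long to wait */
    bool     running;
} HLTimer;

static void hl_timer_start(HLTimer *t, uint32_t now, uint32_t span_ms)
{
    t->start   = now;
    t->span_ms = span_ms;
    t->running = true;
}

/* Unsigned subtraction makes this correct even when the 32-bit
   tick counter wraps between start and now. */
static bool hl_timer_expired(const HLTimer *t, uint32_t now)
{
    return t->running && (now - t->start) >= t->span_ms;
}
```

Detecting that half an hour has passed is then simply `hl_timer_start(&t, now, 30u * 60u * 1000u)` followed by periodic `hl_timer_expired` checks.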
We had been aware of the underlying problem for a long time: in the asynchronous run-time system on which we built the CSP layer, it was not 100% assured that a timer, once subscribed to, would not arrive too late – even if it had been explicitly (or implicitly) cancelled. Once the timer event had come as far as into the message queue (and was not just a node in the sorted list of timeouts), it was too late. The solution in that system was to “colour” timers in a certain way, to detect later that the timer event could be discarded. This was all done at application level by the process. The process then, in a way, becomes its own scheduler. However, a CSP process is not allowed to be shaken by unanticipated scheduling.

We could have made an exception and done the same colouring as mentioned. After all, it was not a channel that would have caused the scheduling, so no other process would have been involved. Still, we went for the clean solution, thinking it would be easier in the end. In order to do this, we did an invisible (to the process) colouring of the timer and an invisible discard of it. But this was the final result; let us go back some steps.

Prior to this (when [1] was written), we had implemented the filtering of unwanted timer events by #ifdef’ing some code in the run-time system, based on the state of the process’ ALT. (With all C #ifdefs, the original run-time system is unaltered, albeit not uncluttered – just like all other operating systems we have studied.) The ALT state takes one of three values:

typedef enum {
    CHAN_ALT_OFF_A              = 0,
    CHAN_ALT_ENABLED_ON_A       = 1,
    CHAN_ALT_ENABLED_INACTIVE_A = 2
} StateALT_a;
If the ALT state was “inactive” and there was no reason to take action, we threw the timer event away by not letting it into the message queue. So far so good. This worked in a product where all channel input was part of an ALT. But with new users, individual inputs and delays were used, and then, after some time, we saw processes crash from unwanted timer events. The problem was that the ALT state naturally follows the life of a process: the “inactive” state may be a later phase’s “inactive”, and similarly for the “not inactive” state. So, we threw away too few events.

In order to fix this, we had inserted a filter in the Scheduler itself, based on a boolean in the process’ context. Thinking it over now, this filter had absolutely no effect. But it had seemed that all unwanted timer events were gone. We would have to make it 100% safe next round.
We saw that for a timeout event to be “taken” by a process, one precondition was that the process should not have been scheduled in the meantime. So, we introduced a mandatory second value in the process context (after the mandatory State, both reachable from the Scheduler). We called this ProcSchedCnt and let the scheduler increment it on every rescheduling. Now, whenever a timer has timed out and is first in the timer list, we check that the process’ ProcSchedCnt and the ProcSchedCnt present in the timer (copied when the timeout was asked for) are equal. We do this only at the place where a timer would enter the ready queue, not when actual scheduling takes place. The previously mentioned “absolutely no effect” filter was then also removed. We now test on ProcSchedCnt and StateALT, and know that no other scheduling event can be present in the ready queue when ProcSchedCnt and StateALT are used to filter. So, a timer scheduling event that reaches the Scheduler is now indeed a true timeout.

A boolean instead of ProcSchedCnt cannot be used here. The process could go on and ask for a new timer, and do this again and again; a boolean state cannot distinguish these states from the state the process was in when it asked for the initial timer. A “linear” tagging must be done. The only problem we see with this is the word width of ProcSchedCnt. We have chosen 16 bits. It is very unlikely that a process should have been scheduled some 65 thousand times before the timer event fires; an 8-bit value probably would work fine, too. An effect that helps here is a built-in feature of the underlying SDL system: ordering a new timer cancels any earlier one – even if, as mentioned, it cannot be cancelled once it has entered the message queue.

1.2 A Channel must be Empty from Both Sides Before a Refilling is Attempted

This problem involved two processes, and analyzing the error scenario was the hardest part.
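The scheduling-count filter of Section 1.1 can be sketched in C like this. It is a hypothetical reconstruction, not the shipped code: the struct and function names are invented, only ProcSchedCnt comes from the text. The timer copies the process’ count when the timeout is requested, and the event is discarded at the point where it would enter the ready queue if the counts no longer match:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t proc_sched_cnt;   /* incremented on every rescheduling */
} ProcCtx;

typedef struct {
    ProcCtx *owner;
    uint16_t cnt_at_subscribe; /* copy taken when timeout was asked for */
} TimerEvent;

static void subscribe_timeout(ProcCtx *p, TimerEvent *ev)
{
    ev->owner            = p;
    ev->cnt_at_subscribe = p->proc_sched_cnt;
}

static void reschedule(ProcCtx *p)   /* done by the scheduler */
{
    p->proc_sched_cnt++;
}

/* Checked where a timed-out timer would enter the ready queue:
   deliver only if the owner has NOT been scheduled meanwhile. */
static bool timer_event_valid(const TimerEvent *ev)
{
    return ev->cnt_at_subscribe == ev->owner->proc_sched_cnt;
}
```

A boolean flag could not replace the counter: after many subscribe/reschedule rounds it could not tell the current subscription apart from a stale one, whereas the 16-bit count gives the “linear” tagging the text describes.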
Once we understood the problem, the solution worked for all cases – but with a small run-time penalty. The error had not been seen in an earlier use of the CHAN_CSP layer; this time, new users had reused an input channel differently than in the previous product.

A process may be “first” on a channel either when it is blocking for input on it (alone or as part of an ALT) or when it is blocking for output. The output blocking is final: the process will only ever be scheduled again when a true output has been done, and we have no timeout on output. However, the “first” process on a channel’s input side may be cancelled if it is in the set of an ALT that was not taken. Almost worse, it may appear again if a new ALT or input is done after this.

The error appeared only during certain scheduling sequences. Sarcastically, we would say it behaved like an asynchronous non-blocking system: the misbehaving sequence of schedulings happens a year after shipment, and then the service telephone rings more and more often. Of course, the designers of such a system would have “seen” this event in their state diagrams, so it would have been taken account of, and after the year all would still be fine. No service trip. After all, asynchronous designs will also work, if designed and implemented correctly. Luckily for us, the misbehaving sequence(s) appeared during tests.

Initially, when a channel had been taken by the second contender and the process was scheduled, the channel’s “first” info was cleared to “none”. After this, any next “first” would not cause any extra scheduling, because it would truly be first. Until the tests, this seemed to be working. But then we saw that some “first” information was not cleared, causing pathological reschedulings: the new “first” was seen as “last”, since there seemed to already be a “first” on the channel. When we started looking for this error, we saw that this
was fine for the channel in mind, but not for the other channels being part of an ALT. The people at Inmos must have seen this when they implemented the ALT on transputers in the 1980s. We had known that the usual ALT implementation started with a set-up and, once the ALT had been taken, ended with a tear-down. To our surprise, we had imagined that we did not need this – at least, not before we were forced². Much code had been written, and we did not want to modify too much in user code. Nor did we really want to shake the CSP layer by including set-up and tear-down functions; we were too far into the project.

Our solution requires a minimum of user code changes. The StateALT variable is one per process, not one per channel. We decided to include a list of pointers in the ALT state structure: pointers to the maximum set of channels ever to participate in the ALT – that is, pointers to their “first” variables. When a channel was taken on an ALT, we were now able to “null” not only the participating channel, but also the others. But we had to make sure that removing the first process on a channel could only be done if the first was in fact “this” process; if not, we would effectively stop other future communications.

The change in user code was in the single module that initialises all the channels (they are global C objects, so this is possible). We also had to initialise the new pointers to the channels’ “first” values. The price we had to pay for this is the added run-time overhead of the looping to test and clear the “first” values. But this is acceptable in our application.

We think the CSP layer has now matured to a level where we feel quite at rest. The SDL layer had reached this level before we built CSP on top of it.

2. Architectural Patterns Extended

2.1 Two Times Zero Solved a Case Better than One Times Zero
Figure 1. Pattern, here with two asynchronous channels (cut-out from our main diagram)

Now over to the application. The diagram in Figure 1 shows two processes communicating over data-free asynchronous channels (dotted lines with arrows) and one synchronous channel in each direction containing data (solid lines with two arrows). The contract between the processes is deadlock-free. The lower process may always send on the up Chan_201 (and get, or not get, a reply immediately on the down Chan_200). If the upper process wishes to initiate a sending down, it starts with a non-blocking signal on Chan_202²
² With hindsight we should have also reused that part of SPoC [2]. In another world (where C/C++ would not rule), we could have used occam & SPoC plus C for the project.
or (see the next paragraph) Chan_204. It then, sooner or later, receives a response on Chan_201 saying “ok, come on”, after which it immediately sends what it had down on Chan_200. While the upper process is waiting for the reply, it is perfectly well able to serve an event or a directive/response pair. The diagram of this is in [1]. Further semantics are that the upper process will not try to send down again before it has got rid of the present message. This has the added benefit that the addition of the asynchronous channel, like the synchronous scheme, will never cause message queue overflow.

Note that we now have two asynchronous channels down: one for high priority and the other for low. During long states, like taking up a fire detection loop (which may take minutes), the upper process may want to send data down. It does not know (and should not know) the state of the lower process, so it gets a reply from below saying: “busy, try me again later”. The lower process had its single (at the time) asynchronous channel switched off in the ALT guard in this state. But then we wanted to send a high-priority escape signal saying to the lower process: “stop what you are doing, even if it is important”. It was this need that had us introduce the “busy” reply. The high-priority asynchronous channel never has the lower process send a “busy” reply up; it will always pull down the escape-type command we had in mind. This could have been solved with an extended communication sequence between the two processes, even with a single asynchronous channel: the lower could have asked for the boolean information explicitly. But inserting the extra asynchronous channel was fine – and it gives less overhead than any extra communications would have done.

2.2 The Deadlock-Free Pattern that was Too Simple for a Certain Complexity

Much design is done in collaboration. The group members (some five of us) have known the application domain for years.
Good and bad solutions, failures and successes were, so to say, implicitly on the table during discussions. We knew what to make, not least because we had been supplied with a specification. We also knew what not to repeat. And since we had decided not to use the UML-based modelling and code generation tool that the host processor team used, we had decided on a process/dataflow model of the architecture. Textual descriptions and message diagrams were our main tools, in addition to informal discussions and white boards.

During the architectural discussions, we saw that data would flow as spontaneous and directive/response protocols – in both directions. Even in the complete system, data would spontaneously flow both from the lowest-level processes to the top, and the other way down. Process roles became important. We therefore needed the above-described deadlock-free pattern between processes that needed both types of protocols. Great was our surprise when we discovered that there was higher-level behaviour implicit in the system that rendered our deadlock-free pattern so complex to use that we needed to devise something different.

In the final design we replaced the asynchronous up Chan_303 (Figure 2) with the synchronous up Chan_301 (Figure 3) and introduced an overflow buffer (composite) process. The contract now is that the upper process never sends down more packets than the downward overflow buffer P_OBuff may hold – i.e. the overflow buffer will never overflow. Therefore the upper process is always able to receive from below. The lower process will never block on a sending up, just as with the asynchronous channel case. So, the design is still deadlock-free.
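The P_OBuff contract can be pictured as a bounded FIFO whose put operation refuses rather than overflows; under the stated contract the sender never has more packets outstanding than the capacity, so the refusal branch is never taken. A hypothetical sketch (capacity and names invented, not the product code):

```c
#include <stdbool.h>

#define OBUFF_CAPACITY 4   /* invented capacity, for illustration */

typedef struct {
    int items[OBUFF_CAPACITY];
    int head, tail, count;
} OBuff;

/* Returns false instead of overflowing.  In a correct system the
   upper process respects the contract and this never fails. */
static bool obuff_put(OBuff *b, int v)
{
    if (b->count == OBUFF_CAPACITY)
        return false;
    b->items[b->tail] = v;
    b->tail = (b->tail + 1) % OBUFF_CAPACITY;
    b->count++;
    return true;
}

static bool obuff_get(OBuff *b, int *v)
{
    if (b->count == 0)
        return false;
    *v = b->items[b->head];
    b->head = (b->head + 1) % OBUFF_CAPACITY;
    b->count--;
    return true;
}
```

Because the buffer can always accept (within the contract), the upper process can always receive from below, which is what keeps the reworked design deadlock-free.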
Figure 2. Standard pattern with one asynchronous channel (cut-out from our main diagram)
Figure 3. Overflow buffer pattern and no asynchronous channel (from our main diagram)

The implicit requirement is that the lower process handles both upcoming spontaneous (alarm-type) events and directives sent down (to a process which handles the detector loop); and that it may wait for responses from detectors on the loop (going up). But the alarm-type events are so important that they are queued in the lower process and need to be sent up in place of a response from a detector loop unit to an earlier directive. We try to avoid implementations where a process tends to be a scheduler of itself. When “up” gets a message from “down” saying that it has alarms, they are pulled up by a “give me more” message, until the last message comes from “down” saying that this was the last. At the same time, a directive down or a reply to “up” may be pending. The main problem was that the up-going event sequence might cross with a down-going event, and that we would need a complex solution to buffer down-going events in the lower process. Instead we ended up with the simple solution that we now have: one single down-going event is stored away in the lower process, to be picked up when the up-going event queue becomes empty.

We essentially replaced the zero-information initial asynchronous signal to “up” with a data-rich initial message from “down” to “up”. We thereby avoided an extra sequence to establish the higher-level states of the two … and the need for the lower process to become a complex buffer handler and scheduler of itself … and the added complexity of the protocol. The essential point here is that the initial architecture was traded for a seemingly more complex one, to make the inner semantics, the protocols and the message diagram simpler. Much anguish was experienced by the white board before we saw the picture.

We felt a rising familiarity with a new set of tools: processes; synchronous vs. asynchronous channels; deadlock avoidance; internal vs. external complexity; seeing when the behaviour of
one process “leaks” into another; understanding the importance of, in some states, not listening to a synchronous (or sender-side asynchronous and data-free) channel – and, last but not least, how to get rid of complexity by using these tools and combining them to the best of our knowledge.

3. Conclusion

The CSP concept used in a small embedded controller has certainly helped us make a complex design implementable. We have some 15 processes and drivers and, since roles are so clearly defined, the behaviour is explainable both at the process and the system level. (A team member burst out with this statement: “this is tenfold simpler than what I used to work with”.) The main objective was met: there is now no need to handle just any message in any internal state. Once the channel functionality is in place, it always works.

However, we have also seen how sensitive the methodology is to having a correct run-time layer. Assuring that the implementation is in fact correct is not trivial. Our strategy to help with this is always to “crash” on a rescheduling that is not anticipated, to be open about it, and to rectify immediately. Our heuristic, then, is the absence of any such behaviour. This is of course in addition to white-board studies of the solutions. Also, keeping a critical view on communication patterns versus internal complexity, or cross-process complexity, has helped us: rethinking one may help in simplifying the other.

Acknowledgements

Thanks to the project leader Roar Bolme Johansen and fellow channel communicator Bjørn Tore Taraldsen for enduring non-blocking behaviour. And to my wife Mari, for just being.

References

[1] Øyvind Teig, “From message queue to ready queue (Case study of a small, dependable synchronous blocking channels API – Ship & forget rather than send & forget)”. In ERCIM Workshop on Dependable Software Intensive Embedded Systems, in cooperation with the 31st
EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), Porto, Portugal, August/September 2005. Proceedings: ISBN 2-912335-15-9, IEEE Computer Press. [Read at http://home.no.net/oyvteig/pub/pub_details.html#Ercim05]
[2] M. Debbage, M. Hill, S. Wykes, D. Nicole, “Southampton's Portable occam Compiler (SPOC)”. In: R. Miles, A. Chalmers (eds.), ‘Progress in Transputer and occam Research’, WoTUG-17 proceedings, pp. 40-55. IOS Press, Amsterdam, 1994.
Øyvind Teig is Senior Development Engineer at Autronica Fire and Security, a UTC Fire and Security company. He has worked with embedded systems for some 30 years, and is especially interested in real-time language issues. See http://home.no.net/oyvteig/ for publications.
Communicating Process Architectures 2006
Peter Welch, Jon Kerridge, and Fred Barnes (Eds.)
IOS Press, 2006
© 2006 The authors and IOS Press. All rights reserved.
A Circus Development and Verification of an Internet Packet Filter

Alistair A. McEwan
Department of Computing, University of Surrey, Guildford, Surrey, GU2 7XH, UK
[email protected]

Abstract. In this paper, we present the results of a significant and large case study in Circus. Development is top-down—from a sequential abstract specification about which safety properties can be verified, to a highly concurrent implementation on a Field Programmable Gate Array. Development steps involve applying laws of Circus allowing for the refinement of specifications; confidence in the correctness of the development is achieved through the applicability of the laws applied; proof obligations are discharged using the model-checker for CSP, FDR, and the theorem prover for Z, Z/Eves. An interesting feature of this case study is that the design of the implementation is guided by domain knowledge of the application—the application of this domain knowledge is supported by, rather than constrained by, the calculus. The design is not what would have been expected had the calculus been applied without this domain knowledge. Verification highlights a curious error made in early versions of the implementation that was not detected by testing.

Keywords. Circus, Development, Verification, Reconfigurable hardware, Handel-C
Introduction

In this paper a case study where a security device is specified, designed, and implemented is investigated. The interesting aspect of this case study is that the device is to be implemented in hardware, and some of the design requirements are dependent on the hardware on which it is deployed. The development of the device is guided by the laws of Circus, and proof obligations for development steps centre around proving the correctness of the resulting refinement at each major development phase.

The case study in question is an Internet packet filter [3,6], a device which sits on a network, monitoring traffic passing through it, and watching for illegal traffic on the network. Typically, these devices can be employed to monitor, route, or prevent traffic on networks: in all of these cases, but particularly in the case of prevention, confidence in the correctness of the implementation is necessary if network security is to be assured.

The major contributions of this paper can be summarised by the following points:
1. The presentation of a top-down design strategy from a set of requirements, and a verification strategy for such a development using Z/Eves and FDR.
2. A demonstration of the calculation of concurrency from a sequential specification using laws of Circus presented in [5], including the generalised chaining operator.
3. The presentation of design patterns for refining Circus processes into Handel-C models, incorporating a model of synchronous clock timing.
4. Evidence that laws of Circus allow the exploration of refinements, guided by engineering intuition, where requirements may not have been explicit in the specification.
[Figure 1. An IP v4 packet header: bits 0–31 laid out as version (ver), IHL, type of service and total length; identifier, flags and fragment offset; time to live, protocol and header checksum; source address; destination address.]
Section 1 presents the problem domain, and some relevant background material. This is followed by a formalisation of requirements and an abstract specification in section 2. In section 3 through to section 6, different components of the implementation are developed and verified. Section 7 presents the composition of this development as a final Handel-C model. Finally, in section 8, some conclusions are drawn.

1. Background

1.1. Packet Filters

An Internet packet filter is an application that monitors network traffic, and performs actions based on criteria concerning that traffic. In this case study, the packet filter monitors traffic on a local section of Ethernet, flagging observations of predetermined source/destination address pairs. An important property of a monitoring device such as this one is that it must not interfere with traffic of no concern to it: essentially, its presence should be effectively unobservable unless it is required to take action.

The packet filter assumes traffic is transmitted using the Internet Protocol (IP), version 4 [2,9]. In IP v4, a packet consists of a header and a payload. The header contains accounting information, whilst the payload contains the information itself. For instance, if a user were accessing a web page, the header would contain information such as their machine address, the address of the web server, and the size of the payload; while the payload would contain (parts of) the web page itself. The structure of an IP v4 packet header is given in figure 1.

Traffic is assumed to be transmitted as a byte-stream. The application should passively observe this byte-stream, identify when sections correspond to a packet header, and investigate the addresses contained within. This is a non-trivial task: the stream is passing at a rapid rate, and the vast majority of the stream will be payload data.
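The header layout of figure 1 can be mirrored in a short executable sketch. The following Python fragment is an illustration only — the paper's implementation is in Handel-C on an FPGA, and the function name and returned field names here are our own, taken from the figure:

```python
import struct

def parse_ipv4_header(header: bytes) -> dict:
    """Unpack the 20-byte IP v4 header of figure 1 into its named fields.
    A sketch only: IP options (IHL > 5) are ignored."""
    assert len(header) == 20
    (ver_ihl, service, total_length, identifier, flags_offset,
     ttl, protocol, checksum, src, dst) = struct.unpack("!BBHHHBBHII", header)
    return {
        "version": ver_ihl >> 4,          # top 4 bits of the first byte
        "ihl": ver_ihl & 0x0F,            # header length in 32-bit words
        "service": service,
        "total_length": total_length,
        "identifier": identifier,
        "flags": flags_offset >> 13,      # top 3 bits
        "offset": flags_offset & 0x1FFF,  # lower 13 bits
        "time_to_live": ttl,
        "protocol": protocol,
        "header_checksum": checksum,
        "source_address": src,
        "destination_address": dst,
    }
```

The network byte order marker `!` in the format string reflects that the byte-stream arrives most-significant byte first.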
The device must be able to identify a packet header, including performing the necessary checksum calculations, extract the source and destination addresses from the header, compare them to a dictionary of known addresses, and return the result of this comparison before the full header has passed through the stream; and this must be done with the minimum amount of interference to the stream.

1.2. Identifying Packets

Packet headers are identified in the stream by performing a checksum calculation, which should equal 0 in ones complement; checking the IP version number, which should equal 4; checking the least significant bit, which should always be 0; and checking the protocol number, which in this case should be 6, representing TCP/IP. When IP packets are identified, they are further examined to identify whether or not the packet requires action by the filter. In this case study the filter does not attempt to prevent passage—it simply observes and acknowledges source/destination address pairs predetermined to be of interest.
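As an informal illustration of these identification checks, the following Python sketch (ours, not the paper's; the least-significant-bit test is omitted, and 0xFFFF — the all-ones word, i.e. negative zero — stands for "0 in ones complement") validates the checksum, version and protocol fields:

```python
def ones_complement_sum(header: bytes) -> int:
    """Ones-complement sum of the 16-bit words of a header."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                        # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return total

def looks_like_header(header: bytes) -> bool:
    """A candidate 20-byte window passes if the IP version is 4, the
    checksum is 0 in ones complement (all-ones sum), and the protocol
    number is 6 (TCP)."""
    return (header[0] >> 4 == 4
            and ones_complement_sum(header) == 0xFFFF
            and header[9] == 6)
```

Because the checksum field itself is included in the sum, a valid header folds to the all-ones word without any extra arithmetic.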
1.3. Motivation and Related Works

Previous works exist describing how a filter such as this can be implemented on a standard FPGA in a network environment [3,6]. These works explain the architecture of the packet filter and propose techniques, implementable in hardware, which allow the identification of packets and the lookup of addresses in real network time—a time that is no longer than it takes for the network to communicate the traffic. Experimental measurements are presented justifying the design and implementation on a small commodity FPGA in 100M-Bit Ethernet.

Attempts to verify the correctness of the implementation exist in [12,4]. These works use the model of the Unifying Theory to model the existing program. The model uses both Z and CSP constructs, and whilst their combination is justified in the Unifying Theory, they are not combined in a syntactic way to give a structured model in the way that Circus does. Nevertheless, these works show that such a model is feasible in modelling the implementation if certain problems can be overcome—for instance, infinite trace generation arising from modelling clock ticks.

These works all contribute to modelling existing programs. This paper goes further in showing that, with an understanding of Handel-C and the FPGA, an abstract specification can be developed into a concurrent implementation. The structure of the Circus specification combined with laws for calculating concurrency allows for the structured, verified development of the implementation.

1.4. Content Addressable Memories

A Content Addressable Memory (CAM), also known as an associative memory, is a device in which search operations are performed based on content rather than on address. Retrieval of data in a CAM is done by comparing a search term with the contents of the memory locations. If the contents of a location match the supplied data, a match is signalled.
Typically, searching in a CAM can be performed in a time independent of the number of locations in the memory, which is where it differs from, for instance, a hash table. Various CAM architectures, and associated speed/area cost trade-offs, have been proposed [11,8].

1.5. Building a CAM on an FPGA

A CAM needs the following:
• Storage for data
• Circuitry to compare the search term with the terms in memory
• A mechanism to deliver the results of comparisons
• A mechanism to add and delete the data in memory if the dictionary is not fixed
Conventional CAMs require circuitry to perform a word parallel, bit parallel search on the words in memory. While this offers the fastest lookups because of the fully parallel nature of the search, it has very high hardware costs. [6] shows how the packet filter application benefits from content based lookups, as fast, constant time lookups are required over an arbitrary data set in the form of a pipeline data stream.

The design of CAM adopted is called a Rotated ROM CAM [6,3], shown in figure 2. Each dictionary word stored has an associated comparator, and this comparator iterates along the word, comparing the relative positions in the dictionary with the search term word. Its simplicity makes it an ideal CAM architecture to implement on an FPGA, because the reconfigurable nature of the FPGA means that the ROM can be designed to be as wide as the number of words in the CAM dictionary and as deep as the width of the words in the dictionary. This design is chosen as it exploits the architecture of the FPGA, allowing a trade-off between hardware costs and the speed of lookups. The trade-off is that the words in memory are not searched in a fully parallel manner, thus reducing the amount of hardware required to search the dictionary, bringing with it an increase in the time it takes to search the dictionary.

[Figure 2. A word parallel, bit serial CAM: W 1-bit comparators search the storage locations against a reference word.]

1.6. Implementation Architecture

The application consists of a number of discrete systems. These are:
• A feeder process taking bytes from the stream and passing them to the packet detector
• The packet detector
• The search engine
• A process to output results
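The word-parallel, bit-serial search of the Rotated ROM CAM described in section 1.5 can be modelled in software as follows (a Python sketch under our own naming, not the Handel-C design; each outer iteration stands for one hardware cycle):

```python
def rotated_rom_search(dictionary, term, width):
    """One match line per dictionary word (word-parallel); each cycle
    compares a single bit position of every word against the search term
    (bit-serial). After `width` cycles the surviving match lines mark
    exactly the stored words equal to the term."""
    match = [True] * len(dictionary)
    for bit in range(width):                   # one bit position per cycle
        for w, word in enumerate(dictionary):  # comparators run in parallel
            if ((word >> bit) & 1) != ((term >> bit) & 1):
                match[w] = False
    return match
```

If the dictionary words are distinguishable in their first few bits — as arranged in section 1.7 — the mismatching lines fall within the first cycles, which is why a correct result can be known well before all `width` comparisons are made.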
The packet detector is the process responsible for monitoring the data passed to the application, and signalling when a subset of this data constitutes an IP packet. When a packet is detected, the search engine decides if it is a packet of interest by comparing the source and destination addresses with those stored in the CAM.

1.7. The Approach to Verification

In this paper, many laws of Circus are used—for instance to manipulate actions into a form where they may be split. Each law typically has a proof obligation associated with it. Discharging all of these proof obligations by hand in a large development is infeasible: there may be many of them. The technique adopted in this case study is much more pragmatic, and is intended to work more as a realistic industrial development. Laws of Circus are repeatedly applied in order to manipulate the actions into the required form. Instead of proving the application of each law, the proof is to show that the final result is a valid refinement of the starting point. This proof obligation is met by model-checking. The CSPM used in verification for each stage may be easily reconstructed.

In order to alleviate the problem of state space explosion in FDR, some simplifications have been employed in the CSPM. Firstly, only three addresses exist. Two of these are in the CAM—meaning that concurrent lookups are required. The third address is not in the CAM, meaning that there is an address not of interest. Secondly, the function addr that returns the address (pairs) in a packet reads the value of many pipeline cells (the pipeline is the component of the application that stores data read in from the network, and consists of a series of individual cells; as an analogy, the pipeline may be thought of as an array, and a cell is analogous to an element in the array), but in the CSPM it only looks at one. Thirdly, each cell may only have a value in the range 0..3. Fourthly, the checksum calculation is simplified: instead of being a predicate over a number of cells,
it only considers the first cell in the pipeline. A data refinement could be proved between this simplification and the real data types implemented; however this is not necessary, as the CSPM verification is really concerned with the action structure and not the data values. Other investigations may require this.

The CSPM model grows very large when the pipeline is greater than five elements. This has an impact on the CAM lookup, which should take 16 cycles. The dictionary has therefore been carefully chosen such that the words are all distinguishable in the first few bits, so correct results are known long before 16 comparisons have been made. In this way, the requirement that the CAM can output a correct result before a packet has left a (shorter) pipeline may still be met.

Some laws require manipulation of user state. This user state, and operations on it, are encapsulated by Z schemas. Where there are proof obligations associated with schema and abstract data type manipulation, these are discharged using Z/Eves. The source input for Z/Eves is in-lined LaTeX in this paper. In some cases, this LaTeX has been manipulated into a more readable form for presentation purposes—for instance in-line schemas—and requires to be re-written back as full schemas before it can be re-run in Z/Eves. For Z theorems, the tactics needed to instruct Z/Eves to complete the proof are listed in the associated proofs.

The complete development, including all of the steps and the tool applied to proof obligations at each step, is given graphically in figure 3, figure 4 and figure 9. The first diagram represents the decomposition of the abstract specification into the three major components. The second diagram represents the decomposition into a chained pipeline and a CAM with sequentialised lookups. The third represents the refinement of this model into a clocked Handel-C implementation.

2. An Abstract Specification

In this section an abstract sequential specification of the packet filter is presented. The packet filter reads in a set of bytes from the network, looking to spot addresses. At this stage of development it is not necessary to know detail about the representation of these, so they are specified as given sets.

Definition 1 Given sets and axiomatic definitions:

[Byte, Addr, BitPair]
RESULT ::= yes | no

dictsize : N
pipesize : N
addr : (N → Byte) → (Addr × Addr)
chk : (N → Byte) → B
last : (N → Byte) → Byte

dictsize ≥ 1
pipesize = 20

□

Addresses of interest are to be stored in a dictionary, while the bytes currently being examined will be stored in a pipeline. At this stage of development it is not yet known how many addresses will be stored in the dictionary when it is finally deployed, so this constant is left loose. An IP header is 20 bytes long: this constant is the length of the pipeline.²

² The development could have started with a more abstract description of the problem, with a data refinement between it and the concrete. However in this paper the concern is developing the concrete data model into a concurrent implementation.
The partial function addr takes a sequence of bytes and returns a pair of addresses, while the function chk takes a sequence of bytes and returns a boolean result. The former will later be used to extract addresses from a packet, and the latter to identify a packet. The function last takes a sequence of bytes and returns the element at the end of the sequence. The precise definitions are omitted for space.

Definition 2 The abstract packet filter:

process Filter = begin
  USt = [ pipe : N → Byte; dict : N → (Addr × Addr) |
          dom pipe = 1..pipesize ∧ dom dict = 1..dictsize ]

  UpdateAll = [ ΔUSt; b? : Byte |
    pipe′ = {x : 1..pipesize − 1; y : Byte | x ↦ y ∈ pipe ∧ x + 1 ∈ dom pipe • x + 1 ↦ y} ∪ {1 ↦ b?} ∧
    dict′ = dict ]

  Run = var c : N; h : (Addr × Addr);
    μ X • (in?b → SKIP ||| out!last(pipe) → SKIP);
      if ¬ chk(pipe) then UpdateAll; X
      else ( h := addr(pipe); c := 1;
        μ Y • UpdateAll;
          if c < pipesize − 1 then (in?b → SKIP ||| out!last(pipe) → SKIP); c := c + 1; Y
          else (in?b → SKIP ||| out!last(pipe) → SKIP ||| match!(h ∈ dict) → SKIP); X )
  • Run
end
2 The local state of the abstract system, given in definition 2, has two components: a pipeline and a dictionary. As the pipeline will be built in hardware, the state invariant does not allow its size to ever change. The size of the dictionary is also constant. Unusually, an initialisation operation has not been specified for these. In the case of the pipeline, this is because initially the values are meaningless: when hardware is powered up, registers have an arbitrary value. In the case of the dictionary, this is because the specification is purposefully being left loose with regards to the addresses of interest. When the final implementation is deployed, these would be given.3 An operation to update the local state exists. This operation takes the state of the pipeline, and a new byte as input. It adds the new byte to the head of the pipeline, and drops the last byte in the pipeline. The dictionary remains unchanged. There are no outputs. The main action of the process reads an input into the local variable b and outputs the last element in the pipeline. Then the predicate chk returns true or false indicating whether or not it believes the pipeline to correspond to a packet header. If not, the new data is stored and shifted. If it does then the address pair in the pipeline is recorded in the local variable h before the shift. Another pipesize − 2 shift cycles are permitted before a result is output on the channel match indicating whether or not the addresses were known to the dictionary. The condition on pipesize ensures that the result is know to the environment before the data has fully left the pipeline. 3 Where, and when these were given would depend upon the target implementation environment. For instance, for a purpose built hardware CAM, they could be included in the state invariant; whilst for a software implementation such as a hash table, it may be done by adding a new operation to initialise the dictionary variable.
3. Refining the Abstraction into a Pipeline and a Checksum

3.1. Process Splitting

A technique for introducing concurrency into a Circus specification is process splitting, discussed in [10] and built on in [5]. If the process paragraphs are disjoint—i.e., the actions of the process each access different components of the state—then the process may be split in two with respect to those disjoint actions and state. Generally, processes must be manipulated into an appropriate form.

Let pd stand for the process declaration below, where Q.pps and R.pps stand for the process paragraphs of the processes Q and R, and F for an arbitrary context (a function on processes). The operator ↑ takes a set of process paragraphs P and schema expressions Q, and joins the process paragraphs of P with ΞQ. For the expression to be well-formed, the paragraphs in P must not change any state in Q. This is the general form of processes to which process splitting laws apply.

process P = begin
  State = Q.st ∧ R.st
  Q.pps ↑ R.st
  R.pps ↑ Q.st
  • F(Q.act, R.act)
end
The state of P is defined as the conjunction of two other state schemas, Q.st and R.st. The actions of P are Q.pps ↑ R.st and R.pps ↑ Q.st. They must handle the partitions of the state separately. In Q.pps ↑ R.st, each schema expression in Q.pps is conjoined with ΞR.st. Similar comments apply to R.pps ↑ Q.st.

Law 1 Process splitting:

pd = (process P = F(Q.act, R.act))

provided Q.pps and R.pps are disjoint sets of paragraphs with respect to R.st and Q.st.
□
Two sets of process paragraphs pps and pps′ are said to be disjoint with respect to states s and s′ if and only if pps = pps ↑ s′ and pps′ = pps′ ↑ s, and no action expression in pps refers to components of s′ or to paragraph names in pps′; further, no action in pps′ refers to components of s or to paragraph names in pps.

The development is aimed at producing an implementation using the Rotated ROM CAM, thus meeting area and speed requirements on the FPGA. The abstract specification must be split into three components: a pipeline, a CAM, and a checksum calculation. Each of these can then be further refined into their respective Handel-C implementations. Inspecting definition 2 suggests a strategy. The main action may be split into two: one which acts on the dictionary, one on the pipeline; the requirement is that dictionary and pipeline state must also be split. To split Run, it must be manipulated into a suitable form.

3.2. Splitting the Main Action

The implementation is to contain a pipeline of cells, storing data from the network. As these are to be implemented as concurrent registers, user state in each cell must be disjoint from the next—the requirement for Process splitting (law 1). Counter-intuitively, therefore, the first development step is to separate the checksum calculation from the rest of the application.
[Figure 3. The development and proof strategy for the first system refinements. Key: process user state (Z), process actions, CSP channels, state splits, and action splits/refinements. The Filter process (state USt, action Run) is split into PipelineA1 (PipeSt, RunA1) and CamChecksumB1 (DictSt, RunB1), communicating on pass and block; CamChecksumB1 is later split into Checksum (RunD) and CamE (DictSt, RunE), communicating on get, ready and done. Proof obligations are discharged with Z/Eves and FDR.]
This is non-obvious as a first step, but is vitally important. The checksum requires to inspect the value of a number of pipeline cells, and given that the states must be disjoint, it cannot be implemented as a global predicate over these cells. The specification must be manipulated such that the checksum inspects a local copy of the pipeline.

The goal of this development phase is to split the component of the action that maintains the pipeline from that which calculates the checksum and performs the lookup. This is done by replicating the behaviour in two actions, and then removing the unrequired behaviours in each. In doing so, there are several synchronous properties of the specification that care must be taken to preserve. Firstly, in and out occur pairwise. Secondly, when a match occurs, it must interleave the correct pair of in and out events. Thirdly, the address to be looked up must be stored and made available on the correct in-out cycle.

To achieve this, two new events are introduced. The event block is used to de-limit in-out cycles after each UpdateAll operation. On each iteration a second new event pass is introduced that communicates the values held in the pipeline to those who may desire read access. In this, and following definitions, the actions that pass local state across are factored out in definition 3 for presentation.

Definition 3 Passing pipeline state:

Pass  = in?b → SKIP ||| out!last(pipe) → SKIP ||| (||| i : dom pipe • pass.i!pipe(i) → SKIP)
Pass′ = in?b → SKIP ||| out!last(pipe) → SKIP ||| (||| i : dom pipe • pass.i?copy(i) → SKIP)
Match  = Pass ||| match!(h ∈ dict) → SKIP
Match′ = Pass′ ||| match!(h ∈ dict) → SKIP
□

Definition 4 Introducing internal events block and pass, and a new concurrent action:

RunA = var c : N; h : (Addr × Addr);
  μ X • Pass;
    if ¬ chk(pipe) then UpdateAll; block → X
    else h := addr(pipe); c := 1;
      μ Y • UpdateAll; block → SKIP;
        if c < pipesize − 1 then Pass; c := c + 1; block → Y
        else Match; block → X
RunB = var copy : N → Byte | dom copy = 1..pipesize; c : N; h : (Addr × Addr);
  μ X • Pass′;
    if ¬ chk(pipe) then block → X
    else h := addr(copy); c := 1;
      μ Y • block → SKIP;
        if c < pipesize − 1 then Pass′; c := c + 1; block → Y
        else Match′; block → X

Run = (RunA |[ {|in, out, match, pass, block|} ]| RunB)
□

The checksum behaviour must also be factored out. This step relies on the property that when det(P), P = P ‖ P. Run is deterministic, so may be placed in parallel with a second copy of itself. However, although the two actions synchronise on their events—preserving the deterministic property—the operation schemas and state variable assignments do not. If this were disregarded the action would, for instance, execute UpdateAll twice every time it were intended to execute once. The second copy of this action therefore has its own copies of local variables c and h, and does not write to global state. In fact, this development step goes one stage further: it introduces a new variable copy to the new action that has the same type as the pipeline, and each pass cycle updates the local copy. This step further relies on laws for local variable introduction, and for introducing a direction in the pass communication. The replicated action is given in definition 4.

Definition 5 Removing replicated behaviours in the pipeline:

RunA1 = μ X • Pass; UpdateAll; block → X
□

The next step is to separate concerns between the two actions—this relies on the properties of synchronisation and distributed co-termination of the in-out-pass sequence. RunA is to form the pipeline, therefore RunB should not engage in in or out. Given that block was introduced to de-limit the in-out sequences, in and out can safely be dropped from RunB and removed from the synchronisation. In fact, the same argument also holds for match: the pipeline should not be aware of matches, therefore it can be dropped from RunA. This allows a further simplification: the variable h no longer plays a role in RunA, so its scope may be restricted to RunB. Moreover, the role played by c was to implement a loop that caused a number of shifts before a match—this is no longer necessary in RunA. Each of the two components may be individually labelled. This is given in definition 5 and definition 6, where RunA1 reads data in and out whilst passing it across to RunB1, which records this data and indicates the result of a lookup when appropriate.

Definition 6 Removing replicated behaviours in the CAM and checksum:

RunB1 = var copy : N → Byte | dom copy = 1..pipesize; c : N; h : (Addr × Addr);
  μ X • (||| i : dom copy • pass.i?copy(i) → SKIP);
    if ¬ chk(copy) then block → X
    else c := 1; h := addr(copy);
      μ Y • block → SKIP;
        if c < # dom copy − 1 then (||| i : dom copy • pass.i?x → SKIP); c := c + 1; block → Y
        else (||| i : dom copy • pass.i?x → SKIP ||| match!(h ∈ dict) → SKIP); block → X
□
RunA1 and RunB1 can be seen to be disjoint with respect to user state. There is no proof obligation associated with this—but if it were not true, further development would fail. However, there is an obligation to show that the new actions have been correctly derived—theorem 1, which states that the parallel combination of the new actions is a refinement of the specification. This may be proved by asserting the equivalent refinement relation using FDR.

Theorem 1 The calculated actions are a refinement:

Run ⊑A (RunA1 |[ {|pass, block|} ]| RunB1) \ {|pass, block|}

Proof

assert Run ⊑FD (RunA1 |[ {|pass, block|} ]| RunB1) \ {|pass, block|}
□
3.3. Partitioning the Global User State

Although the actions have now been split, they both still reference the single global user state. RunA1 accesses the pipeline state, while RunB1 accesses the dictionary. In order to show that this process meets the form applicable to process splitting, these states, and the operations upon them, must be disjoint. In this section, the disjoint states are calculated and verified using Z/Eves. By relying on the properties of schema conjunction, the original user state can be split in two.

Definition 7 Partitioned user state:

PipeSt = [ pipe : N → Byte | dom pipe = 1..pipesize ]
DictSt = [ dict : N → (Addr × Addr) | dom dict = 1..dictsize ]
□

Theorem 2 The partitioned states are correct:

USt ⇔ (PipeSt ∧ DictSt)

Proof prove by reduce; □

The update operation only acts on the pipeline, and leaves the dictionary unchanged.

Definition 8 Partitioning the Update operation:

UpdateAll = [ ΔPipeSt; b? : Byte |
  pipe′ = {x : 1..# dom pipe; y : Byte | x ↦ y ∈ pipe ∧ x + 1 ∈ dom pipe • x + 1 ↦ y} ∪ {1 ↦ b?} ]
□

Theorem 3 The partitioned Update is correct:

UpdateAll ⇔ (UpdateAll ∧ ΞDictSt)

Proof prove by reduce; □
3.4. A First Application of Process Splitting

The actions and the state are now of the correct form for Process splitting to be applied. The abstract pipeline is a very simple process. It contains an abstract data type that holds all of the data present in the pipeline. On each iteration, it reads in a new value, outputs the oldest value, informs the environment of the current state of the pipeline, and then shifts all the data.

Definition 9 The abstract pipeline process:

process PipelineA1 = begin
  PipeSt = [ pipe : N → Byte | dom pipe = 1..pipesize ]
  UpdateAll = definition 8
  RunA1 = definition 5
  • RunA1
end
□

The abstract checksum and lookup process of definition 10 contains the dictionary. The main action of this process performs the checksum calculation on the local copy of the state that is passed to it on each pipeline shift, and outputs the result of the lookup accordingly.

Definition 10 The abstract CAM/checksum process:

process CamChecksumB1 = begin
  DictSt = [ dict : N → (Addr × Addr) ]
  RunB1 = definition 6
  • RunB1
end
□

The complete packet filter is the pipeline in parallel with the checksum, synchronising on the channel used to pass pipeline state across. Both also synchronise on the channel block, thus maintaining the synchronous nature of the behaviour between the two components.

Definition 11 The split packet filter:

Filter = (CamChecksumB1 |[ {|pass, block|} ]| PipelineA1) \ {|pass, block|}
□

4. Implementing the Pipeline as Concurrent Cells

The next stage of development is to implement the pipeline process as an array of concurrent cells, using the generalised chaining operator of [5].

4.1. Implementing the Update Operation

[10] shows that a Z-style promotion distributes through a Circus process. By factoring out a promotion from the abstract specification of the pipeline, a single cell is exposed. However, the UpdateAll operation describes a shift of the entire pipeline: the first task, therefore, is to rewrite this in terms of a single element. The schema Update defines an update on a single element of the pipeline, identified by the input variable i.

Definition 12 A single update:

Update = [ ΔPipeSt; b? : Byte; i? : N | pipe′ = pipe ⊕ {i? ↦ b?} ]
□
350
A.A. McEwan / A Circus Development and Verification of an Internet Packet Filter Pipeline
Figure 4. The development and proof strategy for the second system refinements
For this operation to implement a complete pipeline shift, it must act upon all elements of the pipeline. This may be achieved by replicating and interleaving pipesize copies of the operation. It is necessary to store the initial state of the pipeline in a local variable to ensure that each interleaved Update acts on the correct initial value of its predecessor element.

Definition 13 Interleaved updates :

IUpdate = var copy : N → Byte | copy = pipe •
  ||| i : 1..pipesize • (i = 1 ∧ b = copy(i − 1); Update)
□

Now, the local state may be described in terms of a promotion—a local data type with a schema describing its relation to the global state. A local element is simply a Byte, while the global state of the system is a function from natural numbers (the index of each element in the pipeline) to the elements. The update operation changes an individual element to the value of the input variable.

Definition 14 Local and global views :

PipeCell = [ elem : Byte ]
GlobalPipe = [ pipe : N → PipeCell ]
UpdateCell = [ ΔPipeCell; b? : Byte | elem′ = b? ]
□

The global view of the system, in terms of the local elements, is given by the schema Promote.

Definition 15 The promotion schema :

Promote
  ΔGlobalPipe
  ΔPipeCell
  i? : N
  i? ∈ dom pipe
  θPipeCell = pipe(i?)
  pipe′ = pipe ⊕ {i? ↦ θPipeCell′}
□
For this promotion to be factored out of the system, it is necessary that it is free. That is to say, there is no global constraint that cannot be expressed locally. If this were not the case, then each local element of state would need to be aware of other local elements of state: something that is not permitted if processes are to be concurrent; theorem 4 captures this. Theorem 5 states that the promoted update operation is equivalent to the single update operation above.

Theorem 4 The promotion is free :

∃ PipeCell • ∃ GlobalPipe • Promote ⇒ ∀ PipeCell • ∃ GlobalPipe • Promote

Proof prove by reduce; prove □

Theorem 5 Promoted update is correct :

SingleUpdate ⇔ ∃ ΔPipeCell • UpdateCell ∧ Promote

Proof prove by reduce; □

4.2. A Local Process Implementing a Cell

A single pipeline cell is a process that encapsulates the local data, with an unpromoted input and output action.

process Cell = begin
  PipeCell = [ elem : Byte ]
  UpdateCell = [ ΔPipeCell; b? : Byte | elem′ = b? ]
  RunC = μX • (in?b → SKIP ||| out!elem → SKIP ||| pass.i!elem → SKIP); UpdateCell; block → X
  • RunC
end
Now it seems possible to compose a number of these cells concurrently to form a pipeline using a law such as process indexing of [10]. However, this will be insufficient—this law requires that there is no interference between local processes. This is precisely not the case here, where each process requires the local value of its numeric predecessor in the pipeline—this was the role played by the local variable copy in the definition of IUpdate earlier in this section. Furthermore, only the first input in and the last output out are externally visible.

4.3. Composing Local Processes

This is exactly the scenario that is achieved by generalised chaining. The promotion of each cell is the function from local to global state, and the states and local operations are shown to observe the requirement that they are disjoint because the promotion is free. Promoting (and renaming) the in, out, tick and tock events gives the global action with all the internal communications hidden. The index of each process i is taken from the promotion schema. The operator composes pipesize copies of the process Cell concurrently. A parameter given to the operator is a pair of events that are to chain—to synchronise—events in numerically adjacent processes; these pairs of events are uniquely renamed and placed in the synchronisation sets of adjacent processes accordingly. This synchronisation can be used to communicate the initial value of one cell to its neighbouring cell. Internal events are hidden. The ripple effect of nearest-neighbour communication ensures that the inputs and outputs are ordered correctly. This construction is illustrated for a pipeline of three cells in figure 5.
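The ripple behaviour of the chained cells can be illustrated with a small sequential simulation (plain Python, not Circus; the class and function names are hypothetical). Each cell outputs its stored value and latches the new one, so a chain of n cells delays its input stream by n cycles:

```python
class Cell:
    """One pipeline element: outputs its stored byte, then latches the new one."""
    def __init__(self):
        self.elem = 0  # initial register contents

    def step(self, b):
        out = self.elem   # the out!elem communication
        self.elem = b     # the UpdateCell assignment
        return out

def pipeline_step(cells, b):
    # Chain the cells: each cell's output becomes its successor's input,
    # mimicking the renamed in/out synchronisations of generalised chaining.
    for cell in cells:
        b = cell.step(b)
    return b

cells = [Cell() for _ in range(3)]          # pipesize = 3
outputs = [pipeline_step(cells, b) for b in [1, 2, 3, 4, 5]]
# After three cycles the first input emerges: outputs == [0, 0, 0, 1, 2]
```

This is only a sequential rendering of the synchronous behaviour; in the Circus model the cells run concurrently and the ordering is enforced by the chained events.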
Figure 5. The process [in ↔ out, {|tick, tock|}] 3 • Cell
Definition 16 The pipeline implementation :

Pipeline = [in ↔ out, {|block|}] pipesize • Cell
□

Theorem 6 The implementation of the pipeline is correct

PipelineA1 ⊑P Pipeline

Proof assert PipelineA1 ⊑FD Pipeline □

4.4. Modelling the Local Processes as Handel-C Variables

The description of a single cell is very close to Handel-C. The remaining task is to include the model of the clock. Assignments take place on a rising clock edge: this can be modelled using an event tick. All assignments in all processes must happen before a clock cycle completes, and this can be modelled using an event tock. In adapting the description of a cell to behave as a Handel-C process this clock model must be included. It is trivial in this case: each process performs a single assignment after its communications, therefore each iteration of RunC1 is a single clock cycle. The event block ensured the synchronous behaviour of pipeline cells on each iteration: this function is now achieved by the clock, so block may be dropped.[4] This description of a clocked cell is now only a syntactic step away from the implementation of each cell as a simple variable.

Definition 17 A Handel-C model of the Cell implementation :

process ClockedCell = begin
  PipeCell = [ elem : Byte ]
  UpdateCell = [ ΔPipeCell; b? : Byte | elem′ = b? ]
  RunC1 = μX • (in?b → SKIP ||| out!elem → SKIP ||| pass.i!elem → SKIP); tick → UpdateCell; tock → X
  • RunC1
end
□

[4] Alternatively, this step could be regarded as renaming block to tock and including the tick event to globally synchronise and separate communications from state updates.
Figure 6. The checksum calculation
Adapting the chained processes is trivial: all instances share the same clock, and this is achieved by placing the events in the global (shared) synchronisation set.

Definition 18 The clocked Handel-C pipeline implementation :

ClockedPipeline = [in ↔ out, {|tick, tock|}] pipesize • ClockedCell
□

In the above definition, the clock may be hidden if no further processes are to be introduced to the Handel-C implementation, or if they use a separate clock; however, it is left visible in this implementation as the CAM will share the same global clock. Theorem 7 states that the clocked implementation is correct.

Theorem 7 The implementation of the clocked pipeline is correct

Pipeline \ {|block|} ⊑P ClockedPipeline \ {|tick, tock|}

Proof assert Pipeline \ {|block|} ⊑FD ClockedPipeline \ {|tick, tock|}
□

5. Separating the Checksum and the CAM

The checksum is a ones' complement sum of 16-bit segments of the packet header, and is the standard checksum used in IP packet identification. If the checksum calculation returns the same result as that contained within the header, and several other sanity checks also hold, then the state of the pipeline represents an IP packet header. The source and destination addresses in the pipeline are stored for subsequent inspection. The next stage of development is to separate out the checksum from the process that performs this inspection on the addresses.

Definition 19 Adding the partner action :

RunB2 = var h : (Addr × Addr)... •
  μ X • ...;
    if c < pipesize − 2 then ...; X
    else (... ||| ready → match!(h ∈ dict) → done → SKIP); X
  |[{|ready, match, done|}]|
  μ Y • ready → match!(h ∈ dict) → done → Y
□
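The ones' complement header checksum described at the start of this section is the standard Internet checksum of RFC 1071. The following Python sketch is an illustration only (it is not the paper's Circus chk operation); the function name is hypothetical, and the sample header is a widely published IPv4 example:

```python
def internet_checksum(header: bytes) -> int:
    """Ones' complement sum of 16-bit big-endian words, then complemented."""
    if len(header) % 2:
        header += b'\x00'                          # pad odd-length input
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# A well-known sample IPv4 header with its checksum field (bytes 10-11) zeroed:
header = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
cksum = internet_checksum(header)                  # expected 0xB861
header = header[:10] + cksum.to_bytes(2, "big") + header[12:]
# Re-summing the complete header, checksum field included, now yields 0 --
# this is the property the pipeline uses to recognise a valid IP header.
assert internet_checksum(header) == 0
```

The final assertion mirrors the check performed on the pipeline state: a header is accepted when the checksum over the whole header (including the stored checksum field) verifies.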
The first steps in separating the checksum and the CAM follow a similar pattern to before. A match is prefixed with ready and followed with done. A new second action agrees to this ready-match-done cycle. As before, match may now be dropped from the original. This is shown in definition 19; some parts of the definition not relevant to this design step have been abbreviated. If RunB2 is to be split, it must not share the global variable h. The main tool for removing such dependencies is the introduction of a new event to communicate state: get is introduced for this purpose. Now restricting the scope of h (and of copy and c to the first action) is trivial, and RunB2 may be rewritten as two separate actions RunD and RunE.

Definition 20 The checksum action :

RunD = var copy : N → Byte | dom copy = 1..pipesize; c : N;
  μ X • (||| i : dom copy • pass.i?copy(i) → SKIP);
    if ¬ chk(copy) then block → Check
    else c := 1; get!addr(copy) → SKIP •
  μ Y • block → SKIP;
    if c < # dom copy − 1 then (||| i : dom copy • pass.i?copy(i) → SKIP); block → c := c + 1; Y
    else ready → (||| i : dom copy • pass.i?copy(i) → SKIP); done → block → X
□

Definition 21 The CAM action :

RunE = μ X • get?term → ready → match!(term ∈ dict) → done → X
□

5.1. Splitting the Checksum and CAM

The two actions are now of a form that allows the process to be split. The correctness of this development step can be verified by proving that the refinement relation holds between the main action of the abstract CAM and checksum process of definition 10 and the newly split actions.

Theorem 8 The split actions are a refinement

RunB1 ⊑A (RunD |[ {|get, ready, done|} ]| RunE) \ {|get, ready, done|}
Proof assert RunB1 ⊑FD (RunD |[ {|get, ready, done|} ]| RunE) \ {|get, ready, done|} □

Definition 22 The abstract CAM process :

process CamE = begin
  DictSt = [dict : N → (Addr × Addr) | dom dict = 1..dictsize]
  RunE = definition 21
  • RunE
end
□
In the checksum, local variables may be encapsulated as the user state of the process.

Definition 23 The checksum implementation :

process Checker = begin
  USt = [copy : N → Byte; c : N | dom copy = 1..pipesize]
  RunD = definition 20
  • RunD
end
□

The process monitoring the checksum calculation is now ready for implementation, and may have the program clock introduced. The interleaved pass events are actually emulating read access to the pipeline process, so they must appear before a tick—there is no value to be latched in. Other assignments, such as to the local variable c, must be latched between the tick and the tock. The clock now additionally performs the role of making sure that each pass cycle is synchronous with respect to pipeline shifts (as the clock is shared with the pipeline), so block may be dropped. Another less obvious role of the clock comes from the fact that the events ready and done occur exactly # dom copy − 2 clock cycles after a get is issued (counting starts at 1). These events were introduced to ensure that the CAM would output the result at the correct time. As the CAM is to be implemented on the same FPGA, with the same global clock, the assumption can be made that the match output will happen # dom copy − 2 clock cycles after it receives a get. As long as the development of the CAM respects this assumption, ready and done no longer play a significant role in the behaviour of the checksum and the CAM, and may be dropped. This step has not been made as a result of a direct application of a law or of laws: more is said about this in section 6.

Definition 24 The clocked checksum action :

RunD1 = μX • (||| i : dom copy • pass.i?copy(i) → SKIP);
    if ¬ chk(copy) then tick → tock → Check
    else get!addr(copy) → tick → c := 1; tock → SKIP •
  μ Y • if c < # dom copy − 1 then (||| i : dom copy • pass.i?copy(i) → SKIP); tick → c := c + 1; tock → Y
    else (||| i : dom copy • pass.i?copy(i) → SKIP); tick → tock → X
□

Definition 25 A Handel-C model of the checksum implementation :

process ClockedChecker = begin
  USt = [copy : N → Byte; c : N | dom copy = 1..pipesize]
  RunD1 = definition 24
  • RunD1
end
□

Theorem 9 states that the clocked implementation is correct.

Theorem 9 The clocked checksum is correct

Checker \ {|ready, done, block|} ⊑P ClockedChecker \ {|tick, tock|}
Proof assert Checker \ {|ready, done, block|} ⊑FD ClockedChecker \ {|tick, tock|} □
6. Implementing the Content Addressable Memory

The size of an IP packet header, combined with the clock cycle requirements of the components implemented so far, is a useful piece of information in designing the CAM implementation. It allows combinatorial logic to be split over multiple clock cycles (thereby decreasing wall clock time requirements) and a more serial implementation of a CAM, which may re-use comparand registers and reduce area requirements. The Rotated ROM design of [6,3] consists of ROMs of depth 16 bits and width 2 bits, giving 32-bit words, each one of which corresponds to an address of interest. The search circuitry compares 2 bits at a time, meaning that 16 comparisons are required to compare the search term with a word in the dictionary. The circuitry assigns a value to a flag indicating whether a word matches the search term or not. Figure 8 shows the area costs of this design on a sample FPGA, and figure 7 the clock speeds attainable for increasing CAM sizes: these experimental results confirm that a Rotated ROM CAM of the sizes under consideration will permit the FPGA to be clocked at a sufficiently fast speed to allow the rest of the application to monitor a network in real time.

Figure 7. Clock speeds of the Rotated ROM CAM on a Xilinx 40150 FPGA
There are 20 clock cycles available after a packet arrives in the pipeline before the match result must be output. This can be exploited: the comparison in the dictionary can be designed to use many—even all—of these clock cycles, thus reducing combinatorial logic costs and expensive comparators—and the Rotated ROM CAM is an architecture that exploits this. In this section, the abstract CAM of definition 22 is refined into this implementation.

6.1. Making the Dictionary Tight

Initially, the definition of the dictionary state was left loose. This section begins by making it tight—given in the replacement definition of DictSt below, with values taken from table 1.
Figure 8. Area costs of the Rotated ROM CAM on a Xilinx 40150 FPGA

Direction     | IP address   | 2x16 bitwise representation
Source0       | 163.1.27.192 | 10,10,00,11,00,00,00,01,00,01,10,11,11,00,00,00
Destination0  | 163.1.27.18  | 10,10,00,11,00,00,00,01,00,01,10,11,00,01,00,10
Source1       | 163.1.27.152 | 10,10,00,11,00,00,00,01,00,01,10,11,10,01,10,00
Destination1  | 163.1.27.162 | 10,10,00,11,00,00,00,01,00,01,10,11,10,10,00,10

Table 1. Example address pairs
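The 2×16 BitPair representation used in table 1 is simply the 32-bit address read two bits at a time. A small Python sketch reproduces it from the dotted-quad form (illustrative only; the function name is hypothetical):

```python
def ip_to_bitpairs(ip: str):
    """Render a dotted-quad IPv4 address as sixteen (bit, bit) pairs."""
    bits = ''.join(f'{int(octet):08b}' for octet in ip.split('.'))
    return [(int(bits[i]), int(bits[i + 1])) for i in range(0, 32, 2)]

# 163.1.27.192 -> 10,10,00,11, 00,00,00,01, 00,01,10,11, 11,00,00,00 (Source0)
pairs = ip_to_bitpairs('163.1.27.192')
assert pairs[:4] == [(1, 0), (1, 0), (0, 0), (1, 1)]      # 163 = 10100011
assert pairs[-4:] == [(1, 1), (0, 0), (0, 0), (0, 0)]     # 192 = 11000000
```

Running this over the four addresses of table 1 reproduces the sequences dict.1 to dict.4 in definition 26 below.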
The state in the schema DictSt is the source/destination addresses given as 2×16 BitPair representations. As the dictionary is being implemented in hardware, the values are invariant (statically determined)—there is no need for an initialisation operation, as they never change. User state now contains a dictionary consisting of four concrete addresses, implemented as four BitPair sequences of length 16.[5] In implementing the lookup it is tempting to specify this by defining simple equality tests over elements; however, the design goal is directed by area and speed concerns—optimally, a single comparator for each word in the dictionary.

Definition 26 The state of the dictionary :

DictSt
  dict : N → seq BitPair
  dom dict = 1..4
  dict.1 = (1,0),(1,0),(0,0),(1,1),(0,0),(0,0),(0,0),(0,1),(0,0),(0,1),(1,0),(1,1),(1,1),(0,0),(0,0),(0,0)
  dict.2 = (1,0),(1,0),(0,0),(1,1),(0,0),(0,0),(0,0),(0,1),(0,0),(0,1),(1,0),(1,1),(0,0),(0,1),(0,0),(1,0)
  dict.3 = (1,0),(1,0),(0,0),(1,1),(0,0),(0,0),(0,0),(0,1),(0,0),(0,1),(1,0),(1,1),(1,0),(0,1),(1,0),(0,0)
  dict.4 = (1,0),(1,0),(0,0),(1,1),(0,0),(0,0),(0,0),(0,1),(0,0),(0,1),(1,0),(1,1),(1,0),(1,0),(0,0),(1,0)

In the Rotated ROM CAM, local variables are used to maintain the results of a lookup and index iterations. The definition of the main CAM action below reflects this. The local array
□

[5] The replacement definition of BitPair is omitted as it is clear from its usage here.
result stores the boolean result of comparing each dictionary entry with the search term. The match line outputs a result indicating whether or not any of the comparisons returned true—this is implemented as the disjunction over the elements in the result array. Rather than a simple comparison over each word, the lookup is implemented as an iteration comparing each BitPair with the relevant corresponding place in the search term, meaning that 16 comparisons are performed for each word, with the nth comparison for each word being performed in parallel. When a comparison fails, the fact that that dictionary word does not match the search term is recorded. Implementation of a 2-bit comparator is cheap: in this way, the area costs have been reduced and the combinatorial costs drastically reduced. The events ready and done ensure that the implementation still meets the constraint that the lookup should be complete before the word has left the pipeline.

Definition 27 Sequencing the comparators :

RunE1 = var result : dom dict → B;
  μ X • get?term → c := 0; ran result := true;
  μ Y • if c = 16 then ready → match!(true ∈ ran result) → done → X
    else ∀ i : dom result • result(i) := result(i) ∧ dict.i(c) = head(term); term := tail(term); c := c + 1; Y
□

The universal quantifier in definition 27 may be expanded—resulting in the conjunction of the set of assignments indexed by i. As these assignments are all disjoint with respect to the state that they evaluate and assign to, they may be implemented in an arbitrary order.

Definition 28 Concurrent words :

RunE2 = var result : dom dict → B;
  μ X • get?term → c := 0; ran result := true;
  μ Y • if c = 16 then ready → match!(true ∈ ran result) → done → X
    else ||| i : dom result • result(i) := result(i) ∧ dict.i(c) = head(term); term := tail(term); c := c + 1; Y
□

Unlike the other components in this case study, concurrency is not introduced using process splitting—it has been introduced with concurrent assignments to user state. To attempt process splitting is not useful: although each entry in the dictionary and the comparisons are disjoint, the result array is not. To split these processes—and therefore the result array—and collate results using further communications would mean either that the clock cycle constraint is not met, or that the implementation is less serial.

Definition 29 The CAM implementation :

process Cam = begin
  DictSt = definition 26
  RunE2 = definition 28
  • RunE2
end
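The comparator sequencing of definitions 27 and 28 can be mimicked in Python: one pass of the outer loop corresponds to one clock cycle, and the inner updates (which are concurrent in hardware) and-in the result of a single 2-bit comparison. This is a sketch, not generated from the Circus text; names and the helper ip_pairs are hypothetical:

```python
def cam_lookup(dictionary, term):
    """Rotated-ROM style lookup: 16 cycles of 2-bit comparisons per word."""
    result = [True] * len(dictionary)           # ran result := true
    for c in range(16):                         # one BitPair per clock cycle
        for i, word in enumerate(dictionary):   # concurrent in hardware
            result[i] = result[i] and word[c] == term[c]
    return any(result)                          # match!(true in ran result)

def ip_pairs(ip):
    bits = ''.join(f'{int(o):08b}' for o in ip.split('.'))
    return [(int(bits[i]), int(bits[i + 1])) for i in range(0, 32, 2)]

dictionary = [ip_pairs(a) for a in
              ('163.1.27.192', '163.1.27.18', '163.1.27.152', '163.1.27.162')]
assert cam_lookup(dictionary, ip_pairs('163.1.27.192'))   # in the dictionary
assert not cam_lookup(dictionary, ip_pairs('10.0.0.1'))   # not present
```

Note that every lookup takes exactly 16 iterations regardless of when a word is ruled out, matching the fixed clock-cycle budget assumed by the checksum process.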
□

As the implementation is to be in Handel-C, on the same FPGA as the other components, the global clock may now be introduced in definition 30. The first clock cycle reads in and assigns the value of the search term. The last clock cycle for any given lookup is occupied by outputting the result. In between, there are 16 clock cycles available for the lookup—this is the assumption that was made in the final development step for the clocked checksum process
in definition 25 that allowed the events ready and done to be dropped. The lookup itself actually consists of a sequence of assignments to the result array, where each element in the array is assigned concurrently. As an assignment in Handel-C is latched in during a clock cycle, each assignment in the sequence is a single clock cycle. As there are 16 assignments, one for each BitPair in an address, there are 16 clock cycles. Although this stage of development can be seen to follow from the properties of hiding, it has not been calculated from a direct application of a law, as the hiding does not cleanly distribute through concurrency. It is important that the clock does not block when the process is waiting for input—a communication on get will not happen on every clock cycle. If the extra choice containing tick and tock were not included here, this would be a modelling error that is not apparent in the implementation—in reality, a Handel-C process cannot block the clock from ticking whilst waiting for a communication. However, this modelling error would prevent accurate verification of the implementation.

Definition 30 The clocked CAM action :

RunE3 = var result : dom dict → B;
  μ X • tick → tock → X
    □ get?term → tick → c := 0; ran result := true; tock → SKIP;
  μ Y • if c = 16 then match!(true ∈ ran result) → tick → tock → X
    else ||| i : dom result |[{|tick, tock|}]| • result(i) := result(i) ∧ dict.i(c) = head(term); tick → term := tail(term); c := c + 1; tock → Y
□

Definition 31 The clocked CAM :

process ClockedCam = begin
  DictSt = definition 26
  RunE3 = definition 30
  • RunE3
end
□

Verification of the clocked implementation proves to be interesting. If the CSPM assertion corresponding to theorem 10 is checked, it is found to fail. In this assertion, as with all the other checks of clocked processes, the clock must be hidden to ensure the alphabets of the processes match. Consequently a divergence exists in ClockedCam: where it is waiting for an input, it may perform an infinite series of clock ticks while a get never occurs. Verification therefore needs more care: it must ignore this divergence. A simplistic way of achieving this would be to disallow this possibility and re-check; however, this may not always be possible for more general examples. Instead, the technique is to show that the divergence did not result from the real activities of the process: it is shown to be divergence free when all events other than tick and tock are hidden. The divergence, therefore, must have come from the clock. If the process can then be shown to be a failures refinement, then the implementation is correct.

Theorem 10 The CAM implementation is valid

Cam \ {|ready, done|} ⊑P ClockedCam \ {|tick, tock|}

Definition 32 The equivalent CSPM assertions :

assert ClockedCam \ {|get, match|} : divergence free
assert Cam \ {|ready, done|} ⊑FD ClockedCam \ {|tick, tock|}
□
Figure 9. The development and proof strategy for the final clocked system refinements
7. The Final Implementation

The final implementation is the parallel combination of each of the processes developed. This is given in definition 33.

Definition 33 The final clocked implementation :

process ClockedFilter =
  ( ( ClockedPipeline |[ {|tick, tock, pass|} ]| ClockedChecker ) \ {|pass|}
    |[ {|get, done, tick, tock|} ]| ClockedCam ) \ {|get, done, tick, tock|}
□

Theorem 11 The Handel-C implementation is correct

Filter ⊑P ClockedFilter \ {|tick, tock|}
Proof From the correctness of each stage of development and monotonicity of refinement. □

From the development strategy of figures 3, 4 and 9, and monotonicity of refinement, confidence in the correctness of the implementation is assured. Generally, industrial developments are too large to model-check, and monotonicity must be relied upon.

7.1. A Final Twist

In definition 34, the action BadRunD is presented. This action is included because it highlights an interesting error that may be made in the assumptions.

Definition 34 A bad checksum process :

BadRunD = μX • (||| i : dom copy • pass.i?copy(i) → SKIP);
    if chk(copy) then get!addr(copy) → tick → tock → X
    else tick → tock → X
□

In the correct checksum process, when the calculation returns true, the pipeline is not tested again until that packet has left. The above definition evaluates the checksum on each iteration.
When it returns true, it passes the address to the CAM. If the CAM is in the middle of a lookup, it refuses to accept an address. Therefore, if the definition above is used, the system will deadlock if the checksum returns true before the CAM has finished the last instructed lookup. This version is not a valid refinement: checking the packet filter using this process results in a counter-example evidencing the deadlock. The error was present in early versions of the application. In hardware, leaving a test condition such as this one permanently running often appears to be a natural, and inconsequential, assumption. The code was extensively tested in simulation and on real networks, and the deadlock was not discovered. The assumption about the checksum is that it will not produce a false positive: if it does, it may result in apparent headers overlapping. In reality, the chance of it doing so is extremely remote: otherwise network routers would encounter the problem regularly. In fact, routers typically protect against this problem by disregarding rogue packets. This is why testing did not highlight the issue, as the sample data set had been verified by a router. Not only does this highlight the value of the formal development in Circus, but it provides an interesting starting point for requirements checking and investigating the real security offered by the device. While real networks may not present the possibility of false positives, formal development has shown that the device does not function if they do actually happen—and this may form the starting point for a malicious attack on the device.

8. Summary

The case study began with a high level abstract specification of a network packet filter. Through a series of design steps—each one of which was guided by domain knowledge rather than Circus—an implementation that corresponds to a Handel-C program was calculated. The correctness of each major design step was verified using Z/Eves and FDR.
By manipulating the processes into forms applicable to the process splitting law, calculating concurrency in the specification proved to be relatively straightforward; however, some of the manipulations—specifically those where assumptions were made about a global clock—relied heavily in places on post-mortem verification. The structure, and rigour, of the development is the most advanced recorded in the literature for Handel-C and Circus. The intention was to capture the level of rigour and applicability of domain expertise that may be adhered to in an industrial development, and to show that this level of rigour is both feasible and sufficient for large projects. This was achieved: in fact, an erroneous assumption in the original design was uncovered that testing alone had not exposed. Due to its simplicity, the implementation has a natural mapping onto Handel-C; although, as a formal semantics for Handel-C has not yet been approved[6], this final step is not as formal as that of [7] or [1]. Due to the nature of refinement in Circus, some of the traditional problems in Handel-C were naturally avoided: for instance, it should not normally be possible to derive a program where two processes attempt to assign to a variable concurrently. This leads to an interesting artifact in the model: although the Handel-C code may share access to variables—in particular read access—the Circus model may not. An idiom involving regular updates of local state was appealed to in order to emulate this read access. However, in the final code, there is no need to copy the state of the pipeline in the checksum process—it is Circus that requires this. This is clearly an important consideration for hardware area constraints, and is a problem in need of further attention. Decisions about the design of the device, and where and how concurrency was introduced and exploited, were governed by domain knowledge and empirical evidence, rather than solely by laws of Circus.
The necessity of supporting the application of domain knowledge is important. Significant gains in the end product were made by targeting design steps at features of Handel-C and the FPGA. A different correct implementation could have been developed without this knowledge; but it may not have met the speed and area requirements, which only become apparent after hardware has been built and tested. Early experiences, gained from empirical experiments, guided these judgements. A method of including wall clock speed and hardware area parameters explicitly in the design process may well make for a development method which becomes very cumbersome, and detracts from the elegance of the natural refinement laws. More work is needed to fully consider this.

The most significant achievement of this case study is that requirements have been met by drawing on expert domain knowledge; and that the correctness of applying this knowledge has been verified at every stage by drawing on formal techniques. This is a significant demonstration of the applicability of formal techniques to a typical engineering process. The application was compiled and run on a Xilinx 40150 series FPGA which clocked at 20MHz, operating on traffic running at 160MBit/s—sufficiently fast to operate as a real time device on standard fast Ethernet. Sample dictionaries of several hundred IP addresses were used on genuine network traffic. The application found and identified the same packets in the stream as standard network monitoring utilities such as snoop.

[6] An item of work we are currently engaged in is a timed model of CSP that matches the timing semantics of Handel-C.

Acknowledgements

The author would like to thank Steve Schneider, Jim Woodcock, Ana Cavalcanti, and Wilson Ifill for their technical guidance, assistance and suggestions with this work.

References

[1] A. L. C. Cavalcanti. A Refinement Calculus for Z. DPhil thesis, The University of Oxford, 1997.
[2] S. Kent and R. Atkinson. IP Authentication Header. Technical Report RFC-2401, The Internet Society, November 1998.
[3] Alistair McEwan, Jonathan Saul, and Andrew Bailey.
A high speed reconfigurable firewall based on parameterizable FPGA based Content Addressable Memories. In Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, volume 2, pages 1138–1144. CSREA Press, June 1999.
[4] Alistair A. McEwan. The design and verification of Handel-C programs. Technical report, Oxford University Computing Laboratory, 2001. Invited talk, DARPA 2001.
[5] Alistair A. McEwan. Concurrent program development. DPhil thesis, The University of Oxford, to appear.
[6] Alistair A. McEwan and Jonathan Saul. A high speed reconfigurable firewall based on parameterizable FPGA-based content addressable memories. The Journal of Supercomputing, 19(1):93–105, May 2001.
[7] Carroll Morgan. Programming from Specifications. International Series in Computer Science. Prentice-Hall, 1990.
[8] Behrooz Parhami. Architectural tradeoffs in the design of VLSI-based associative memories. Journal of Microprocessing and Microprogramming, 38:27–41, 1993.
[9] J. Postel. Internet Protocol. Technical Report RFC-791, The Internet Society, September 1981.
[10] Augusto Sampaio, Jim Woodcock, and Ana Cavalcanti. Refinement in Circus. In Lars-Henrik Eriksson and Peter Alexander Lindsay, editors, FME 2002: Formal Methods—Getting IT Right, pages 451–470. Springer-Verlag, 2002.
[11] Kenneth J. Schultz and P. Glenn Gulak. Architectures for large capacity CAMs. INTEGRATION, the VLSI Journal, 18:151–171, 1995.
[12] J. C. P. Woodcock and Alistair A. McEwan. An overview of the verification of a Handel-C program. In Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications. CSREA Press, 2000.
Communicating Process Architectures 2006
Peter Welch, Jon Kerridge, and Fred Barnes (Eds.)
IOS Press, 2006
© 2006 The authors and IOS Press. All rights reserved.
363
Classification of Programming Errors in Parallel Message Passing Systems

Jan B. PEDERSEN
University of Nevada, 4505 Maryland Parkway, Las Vegas, Nevada, 89154, USA
Tel.: +1 702 895 2557; Fax: +1 702 895 2639
[email protected]

Abstract. In this paper we investigate two major topics. First, through a survey given to graduate students in a parallel message passing programming class, we categorize the errors they made (and the ways they fixed the bugs) into a number of categories. Second, we analyze these answers and provide some insight into how software could be built to aid the development, deployment, and debugging of parallel message passing systems. We draw parallels to similar studies done for sequential programming, and finally show how the idea of multilevel debugging relates to the results from the survey.

Keywords. Parallel programming errors, debugging, multilevel debugging, parallel programming
Introduction

“If debugging is the process of removing bugs, then programming must be the process of putting them in.” This well-known quote, attributed to Dijkstra, was probably made in jest, but it holds some amount of truth. A number of papers have been written about debugging, both for sequential and parallel programming, but many of the debugging systems they describe are not used by the average programmer. Numerous reasons for this are given, including restrictive interfaces, information overload, and the wrong level of granularity. These systems also fail to take into account the types of errors that occur in parallel message passing programs, focusing primarily on well-known sequential errors. In other words, the same level of granularity is used for all error types, which we believe is a serious mistake. The first step toward a solution is to obtain a better understanding of the types of errors that programmers encounter. To this end, a class of graduate students at the University of Nevada, Las Vegas, answered a questionnaire about their experiences with programming parallel message passing programs. We also asked them to report every runtime error they encountered throughout the semester. Along with each report they submitted information about the cause of the error, how the error was found, and how long it took to find. In this paper we present the analysis of the error reports and the questionnaire, along with a number of suggestions for how development environments, and in particular debuggers, can be developed around these error types. In addition, we argue that multilevel debugging can be a useful new debugging methodology for parallel message passing programs. We propose, as tool developers, that we need to understand the types of errors that our clients (the programmers) make; if we do not understand these errors, we cannot provide tools and techniques that will have a significant impact.
J.B. Pedersen / Classification of Programming Errors in Parallel Message Passing Systems
1. Related Work

1.1. The Parallel Programming Domain

Parallel programming involves a set of components that must each be considered when developing a parallel system. This set, which we regard as the parallel programming domain, includes, among others, the following aspects of the code: sequential code, interprocess communication, synchronization, and processor utilization. Understanding the issues involved with the components of this domain makes it easier to understand the source and manifestation of errors. This understanding is useful for determining the approach needed to debug parallel programs efficiently. In addition, it helps determine where to focus the debugging effort, depending on which component of the domain the programmer looks for errors in.

In [1], a four-stage model for constructing a parallel program, referred to as PCAM and representing the parallel programming domain, is suggested. The four components are:

1. Partitioning. The computation to be performed and the data it operates on are decomposed into small tasks.
2. Communication. The communication required to coordinate task execution is determined, and the appropriate communication structures and algorithms are defined.
3. Agglomeration. The task and communication structures defined in the first two stages of a design are evaluated with respect to performance requirements and implementation costs.
4. Mapping. Each task is assigned to a processor in a manner that attempts to satisfy the competing goals of maximizing processor utilization and minimizing communication costs.

The last two components, agglomeration and mapping, are mostly concerned with performance issues which, while important, are outside the scope of this paper. For the first two components, partitioning and communication, we propose the following additional breakdown:

1. Algorithmic changes. Many parallel programs begin life as a sequential program.
If parallel algorithms are based on, or derived from, existing algorithms and/or programs, then a transformation from the sequential to the parallel domain must occur. This transformation typically consists of inserting message passing calls into the code and changing the existing data layout; for example, shrinking the size of arrays as data is distributed over a number of processes. However, if the sequential algorithm is not suitable for parallel implementation, a new algorithm must be developed. For example, the pipe-and-roll matrix multiplication algorithm [2] does not have a sequential counterpart.

2. Data decomposition. When a program is re-implemented, the data is distributed according to the algorithm being implemented. Whether it is the transformation of a sequential program or an implementation of a parallel algorithm from scratch, data decomposition is a nontrivial task that cannot be ignored when writing parallel programs, as not only correctness but also efficiency greatly depends on it.

3. Data exchange. As parallel programs consist of a number of concurrently executing processes, the need to explicitly exchange data inevitably arises. This problem does not exist in the sequential world, where all the data is available in the single process running the program. On a shared memory machine, the data can be read directly from memory by any process; there is still the problem of synchronized access to shared data to consider, but no sending and receiving of data is needed.
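To make the data decomposition step concrete, the index arithmetic behind a simple block distribution can be sketched in plain C. This is our own illustration, not code from the surveyed programs; the function names are hypothetical, and the computed sizes correspond to the "shrunk" local arrays mentioned above.

```c
/* Illustration only: block decomposition of an n-element array over p
 * processes. The first (n % p) ranks receive one extra element, so the
 * local pieces tile the global array exactly. Getting this arithmetic
 * wrong is a typical data decomposition error. */

/* Number of elements owned by process `rank`. */
int block_size(int rank, int p, int n) {
    return n / p + (rank < n % p ? 1 : 0);
}

/* Global index of the first element owned by process `rank`. */
int block_start(int rank, int p, int n) {
    int base = n / p, extra = n % p;
    return rank * base + (rank < extra ? rank : extra);
}
```

For n = 10 and p = 4 this gives local sizes 3, 3, 2, 2 starting at global indices 0, 3, 6, 8; each process then allocates only its local piece rather than the full array.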
When working with a cluster of processors, each having a separate memory, message passing becomes necessary. When message passing systems like MPI [3] and PVM [4] are used, the programmer is responsible for a number of different tasks: specifying the correct IDs of the involved processes, packing messages into buffers, using the correct functions to pack the data depending on its type, and assigning tags to the messages. In part, the difficulty of using a message passing library like PVM or MPI lies in the low level of its interface.

4. Protocol specification. The protocol for a parallel system is defined as the content, order, and overall structure of the message passing between communicating processes. Along with the data exchange, the communication protocol of the program is a new concept introduced by parallelizing the algorithm.
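The envelope bookkeeping described above can be sketched in plain C (no actual MPI calls; `ANY_SOURCE`, `ANY_TAG`, and `matches` are our own names, chosen to mimic the MPI wildcards). A receive accepts a message only when both envelope fields match, which is why a wrong process ID or tag typically manifests as a stray or missing message rather than an immediate failure:

```c
/* Sketch of the envelope matching that a message-passing receive
 * performs. This is an illustration, not MPI itself. */
enum { ANY_SOURCE = -1, ANY_TAG = -1 };  /* wildcard values, as in MPI */

struct envelope {
    int source;  /* rank of the sending process */
    int tag;     /* programmer-assigned message tag */
};

/* Returns 1 if a receive posted with (src, tag) accepts a message
 * carrying envelope `m`; a wildcard field matches anything. */
int matches(struct envelope m, int src, int tag) {
    return (src == ANY_SOURCE || src == m.source)
        && (tag == ANY_TAG || tag == m.tag);
}
```

A receive posted with the wrong tag simply never matches: the receiver blocks while the message sits in the queue, a failure mode whose symptom appears far from its cause.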
Figure 1. The sequential versus the parallel programming domain
Figure 1 shows a stylized representation of a sequential and a parallel program. As shown, a sequential program is depicted as a single box, representing the sequential code of the program. The parallel program is represented as a number of boxes, each consisting of three nested boxes. The innermost of these boxes represents the sequential program that each process in the parallel program executes. The sequential code of the parallel program can either be an adaptation of the existing sequential program or a completely rewritten piece of code. The middle box represents the messages being sent and received in the system (the data exchange), and the outer box represents the protocol that the communicating processes must adhere to.

1.2. The Debugging Process

A well known approach to debugging was proposed by Araki, Furukawa and Cheng [5]. They describe debugging as an iterative process of developing hypotheses and verifying or refuting them. They proposed the following four-step process:

1. Initial hypothesis set. The programmer creates a hypothesis about the errors in the program, including the locations in the program where errors may occur, as well as a hypothesis about their cause, behaviour, and the modifications needed to correct them.
2. Hypothesis set modification. As the debugging task progresses, the hypothesis set changes through the generation of new hypotheses, refinement, and the authentication of existing ones.
3. Hypothesis selection. Hypotheses are selected according to certain strategies, such as narrowing the search space and the significance of the error.
4. Hypothesis verification. The hypothesis is verified or discarded using one or more of four different techniques: static analysis, dynamic analysis (executing the program), semi-dynamic analysis (hand simulation and symbolic execution), and program modification.

If the errors have not been fixed after step four, the process is repeated from step two. In the above model, step four, hypothesis verification, is the focus of our work. Step one can in some situations be automated to a certain degree; examples of such automation include the deadlock detection and correction presented in [6].

1.3. The Why, How and What of Errors

M. Eisenstadt describes in [7] a 3-dimensional space in which sequential errors are placed according to certain criteria. This classification shows some interesting results, which we briefly summarize. A total of 51 programmers were asked to participate in a study in which programming errors are placed into a 3-dimensional space. The 3 dimensions are:

1. Why is the error difficult to find?
2. How is the error found?
3. What is the root cause of the error?

For dimension 1, 29.4% of the answers fell into the category Cause/effect chasm: what makes these errors hard to find is the fact that the symptom of the error is far removed in space and time from the root cause. The second most frequent answer was Tools inapplicable or hampered, which covers the so-called ‘Heisenbugs’ [8]. It is notable that over 50% of the cases are covered by these two categories. The first category, the cause/effect chasm, is greatly amplified in the parallel programming domain, and the second category is, as we have already pointed out, one of the problems we are researching. For dimension 2, concerned with how an error was found, the most frequent answer was Gathering data (53% of the answers fell into this category). This category covers the use of print statements, debuggers, break points, etc. The second most frequent answer was Inspeculation, which covers hand simulation and thinking about the code.
A total of 25.5% of the answers fell into this category. An interesting, but not surprising, result is that data gathering (e.g., print statements) and hand simulation account for almost 78% of the techniques reported for locating errors (in Eisenstadt’s study). This result corroborates the result of Pancake [9]: up to 90% of all sequential debugging is done using print statements. While the use of print statements is straightforward when working with sequential programs, their use in parallel programs is often more complicated. Often, processes run on remote processors, which makes redirecting output to the console difficult. Even when output can be redirected to the console, all processes write to the same window, making the interpretation of the output a challenging task. This is an example of the information overload mentioned earlier. Furthermore, the order of the output (i.e., the debugging information from the concurrently executing processes) is not the same for every run, as the processes execute asynchronously and only synchronize through message passing. A possible solution is to have each process write its output to a disk file. However, this introduces the problem of non-flushed file buffers: if a process crashes, the buffer might not be flushed, losing output already written by the program. Of course this can be solved by inserting calls to flush the I/O buffers, but if these are missing, the programmer ends up spending time debugging the code added for debugging purposes! In the worst case this can lead the programmer to believe that the process crashed somewhere between the last print statement that appears in the file and the first one that does not. A lot of time can then be wasted looking for an error in a place where no error can be found.
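One mitigation for the unflushed-buffer trap is to route every debugging message through a helper that prefixes the process rank and flushes immediately. The sketch below is our own (the function names are hypothetical, and the rank is passed in explicitly rather than obtained from a message passing library):

```c
#include <stdio.h>
#include <stdarg.h>

/* Build a debug line of the form "[rank R] message" into buf.
 * Kept separate from the printing so the prefix logic is testable.
 * Returns the number of characters written (excluding the NUL). */
int format_debug(char *buf, int cap, int rank, const char *fmt, ...) {
    va_list ap;
    int k = snprintf(buf, cap, "[rank %d] ", rank);
    va_start(ap, fmt);
    k += vsnprintf(buf + k, cap - k, fmt, ap);
    va_end(ap);
    return k;
}

/* Emit one debug line and flush immediately, so a later crash cannot
 * leave the message stranded in a stdio buffer. */
void debug_print(int rank, const char *fmt, ...) {
    char line[512];
    va_list ap;
    int k = snprintf(line, sizeof line, "[rank %d] ", rank);
    va_start(ap, fmt);
    vsnprintf(line + k, sizeof line - k, fmt, ap);
    va_end(ap);
    fprintf(stderr, "%s\n", line);
    fflush(stderr);  /* the crucial step: never buffer debug output */
}
```

With per-process log files, flushing after every line restores the inference that print-statement debugging relies on: the last line in the file really is the last one the process executed past.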
The third dimension, the root cause of the error, contains 9 different categories; the most noteworthy is the most frequent one, Memory, which covers errors such as overwriting a reserved portion of memory (causing the system to crash) and array subscripts out of bounds. 25.5% of the answers fell into this category. The second most frequent root cause was faulty hardware (with 17.7%), and in third and fourth place, with 13.7% and 11.8% respectively, came faulty design logic (algorithmic design/implementation problems) and initialization, which covers wrong types, redefinition of the meaning of system keywords, or the incorrect initialization of a variable. Nearly 50% of the errors are caused by the first two categories. This agrees well with previous studies in which tools and runtime systems are described as a source of errors [9]. The classification used in dimension 3 is a mixture of deep plan analysis [10,11] and phenomenological analysis [12]. Deep plan analysis states that many bugs can be accounted for by analyzing the high level abstract plans underlying specific programs, and by specifying the possible fates that a plan component may undergo (i.e., missing or misplaced). An alternative phenomenological taxonomy can be found in [12], where the root causes are divided into nine categories. Although all errors essentially trace back to a piece of sequential code that executed on a processor somewhere in the parallel system, we should still consider the errors that occur at conceptually higher levels of the parallel programming domain. By ignoring the higher levels and attempting to use tools from a lower level, we often run into information overload or other problems. Even though a protocol error is caused by sequential code somewhere in the system, such errors are more easily found if the level of granularity is that of the protocol.
Naturally, it is vital that the tool at this level can map the error back to the sequential code, as the correction will have to be made there (this is one of the main design goals of multilevel debugging). If we accept the decomposition of the parallel programming domain as stated above, as well as the overall debugging technique of hypothesis development and verification, we still need to gather information about the error types, as Eisenstadt did for sequential errors. This is the study presented in the following sections.
2. The Framework

The main goal of this research is to clarify a number of subjects related to parallel programming and the debugging of parallel programs. First of all, we wish to obtain some insight into the types of errors the programmers encounter, and secondly to obtain data about the techniques they used to locate and correct them. We believe that this information serves as a good basis for deciding how programming and debugging tools for parallel (message passing) programs should be developed. It is important to understand the programming domain (in this situation, the parallel programming domain with message passing) in order to make qualified decisions about how to correct the errors. The subject of error types is useful to a tool developer in a number of ways. First and foremost, if a large percentage of errors are of a certain type, it is important to tailor the tools to assist the user in locating and correcting errors of this type rather than a different type that might not occur as frequently. Secondly, it gives the tool developer an idea of where the errors are located; that is, are most errors in the sequential code, are they related to the data decomposition or the functional decomposition, or are they related to the use of the message passing API? Such information is invaluable to developers of programming environments as well: it pinpoints the area where the tool has the greatest chance of having an impact on the development cycle. One of the main reasons for this research is a result by Cherri Pancake [13] which states
that tools for parallel programming/debugging are often only used by their developers. She claims that this is caused by the fact that the tool developer and the tool user might have different foci on what they want/need from a tool.

3. The Error Reports

The programmers were asked to submit a small web questionnaire about all the run-time errors they encountered throughout the semester. These were submitted through a simple web interface, and contained just three questions:

• Describe the bug.
• How did you find/fix it?
• How long did it take?

We attempt to mimic the study by Eisenstadt as closely as possible by asking how the error was found and what caused it. These questions correspond closely to dimensions 2 and 3 of Eisenstadt’s questionnaire.

3.1. The Programs

In this section we briefly describe the six different programs the programmers wrote throughout the semester. The following list (in no particular order) gives a brief description of each:

• Equation Solver — Using one master and n slave processes, solve an upper triangular system of equations.
• Mandelbrot — Using one master and n slaves in a work farm model, compute a Mandelbrot set.
• Matrix Multiplication — Implement the pipe-and-roll [2] matrix multiplication algorithm.
• Partial Sum — Implement a partial sum algorithm that runs in time O(log n).
• Pipeline Computation — Using functional decomposition, implement a multistage pipeline with dispersers and collectors that allow for multiple instances of some stages of the computation to achieve a good load balance.
• Differential Equation Solver — Solve a differential equation using a discrete method.

Depending on the type of the error, we categorize the answers to the “Describe the bug” question into seven different categories. These categories are based on the first two components of the PCAM model [1], namely partitioning and communication.
The partitioning is further subdivided into data decomposition and functional decomposition, and the communication is divided into API usage as well as the three major levels of the parallel programming domain: sequential, message, and protocol. Finally, a category (other) for errors that do not fit any other category was added. In more detail, the seven categories we chose are:

• Data Decomposition — The root of the bug had to do with the decomposition of the data set from the sequential to the parallel version of the program.
• Functional Decomposition — The root of the bug was the decomposition of the functionality when implementing the parallel version of the program.
• API Usage — This type of error is associated with the use of the MPI API calls. Typical errors here include passing data of the wrong type or misunderstanding the way the MPI functions work.
• Sequential Error — This type of error is the type we know from sequential programs. This includes using = instead of == in tests, etc.
• Message Problem — This type covers sending/receiving the wrong data; that is, it is concerned with the content of the messages, not the entire protocol of the system.
• Protocol Problem — This error type is concerned with stray/missing messages that violate the overall communication protocol of the parallel system.
• Other — Bugs that do not fit any of the above categories are reported as ‘other’. This includes wrong permissions on programs to be spawned, faulty parallel I/O, etc.

We believe that this breakdown will reveal a lot of information about where the bugs are located and where focus should be placed in the development and debugging process. It should be clear that the first three items are issues that could be aided in the development process, whereas the next three should have strong debugging support. This partitioning of course does not rule out development support for message and protocol problems, or debugging support for data or functional partitioning. The base for each of these programs was either a sequential program (Equation Solver, Mandelbrot, Differential Equation Solver) or an abstract parallel algorithm. The programs were to be implemented in C using the MPI [3] message passing interface.

4. The Questionnaire

The second part of the survey was a questionnaire given at the end of the semester. The objectives of this questionnaire were to discover what the programmers thought was the hardest topic, to learn about their general debugging habits, and to obtain a picture of the type of errors they perceive as the most frequently encountered. Furthermore, we asked for a wish list with respect to the functionality of development and debugging tools. The survey contained the following 6 questions:

1. Please mark the level of difficulty for each of the following points (1=easy, 5=hard):
   • Data decomposition
   • Function decomposition
   • Communication calls
   • Debugging the code
2. What do you think is the hardest part of developing a parallel program?
3. List the 3 types of errors you encountered the most.
4. What was your main approach to debugging?
5. What sort of programming support would you find useful (not debugging)?
6. What sort of debugging support would you find useful?
The answers to these questions should give an indication of what the programmer perceives to be hard, and when compared to the actual error reports, it will show whether their perception of parallel message passing programming is correct. In addition, it will be revealed whether the errors they think they get most frequently are indeed the errors they reported.

5. Result of Error Reporting

Table 1 summarizes the results of the online error reporting survey, and Figure 2 shows a graphical representation of the result. As can be seen, 16.77% + 8.39% = 25.16% of errors fall into the decomposition categories, and 14.84% + 10.32% + 24.52% = 49.68% fall into one of the three levels of the parallel programming domain (all non-decomposition errors should fall into one of these categories; here we do not count the API Usage category). It is notable that almost one quarter of the errors are protocol related, which means that serious support is needed at this level.

Figure 2. The results of the error reports

Program                    | Data Decomp. | Func. Decomp. | API Usage   | Seq. Error  | Message Prob. | Protocol Prob. | Other
Equation Solver (n1 = 8)   | 2 (25.00%)   | 0 (0.00%)     | 1 (12.50%)  | 0 (0.00%)   | 1 (12.50%)    | 3 (37.50%)     | 1 (12.50%)
Mandelbrot (n2 = 49)       | 10 (20.41%)  | 7 (14.29%)    | 11 (22.45%) | 5 (10.20%)  | 5 (10.20%)    | 7 (14.29%)     | 4 (8.16%)
Matrix Mult. (n3 = 33)     | 9 (27.27%)   | 3 (9.09%)     | 6 (18.18%)  | 2 (6.06%)   | 3 (9.09%)     | 10 (30.30%)    | 0 (0.00%)
Partial Sum (n4 = 26)      | 2 (7.69%)    | 2 (7.69%)     | 3 (11.54%)  | 7 (26.92%)  | 3 (11.54%)    | 8 (30.77%)     | 1 (3.85%)
Pipeline Comp. (n5 = 4)    | 0 (0.00%)    | 1 (25.00%)    | 1 (25.00%)  | 1 (25.00%)  | 0 (0.00%)     | 1 (25.00%)     | 0 (0.00%)
Differential Eq. (n6 = 35) | 3 (8.57%)    | 0 (0.00%)     | 9 (25.71%)  | 8 (22.86%)  | 4 (11.43%)    | 9 (25.71%)     | 2 (5.71%)
Total (n = 155)            | 26 (16.77%)  | 13 (8.39%)    | 31 (20.00%) | 23 (14.84%) | 16 (10.32%)   | 38 (24.52%)    | 8 (5.16%)

Table 1. Results of the online error reporting survey

Approximately 15% of the errors are sequential, and if we count the API Usage errors we reach a total of almost 45% of errors that are directly linked to faulty sequential code. This strongly suggests that support for sequential debugging is extremely important; of course, such support should not suffer from the problems current tools have (e.g., information overload and granularity mismatch). As a side note, we believe it is reasonable to assume that the 20% of errors that fell into the API Usage category will be reduced as the programmer becomes more familiar with the message passing API. We are certain that many API usage errors could be avoided through the use of a development environment that can aid the programmer in choosing the right values/types for the arguments.

5.1. Debugging Time and Print Statements

The third question on the error reporting page asked the programmers to give an estimate of the time it took to locate and correct the bug. Table 2 shows the result of this question, along with a count of how many times the bug was located using regular print statements inserted in the code at strategic places.

Program               | Average Time | Print Statements | # Answers
Equation Solver       | 43 minutes   | 2 (25.00%)       | 8
Mandelbrot            | 45 minutes   | 22 (44.90%)      | 49
Matrix Multiplication | 47 minutes   | 14 (42.42%)      | 33
Partial Sum           | 63 minutes   | 19 (73.09%)      | 26
Pipeline Computation  | 28 minutes   | 0 (0.00%)        | 4
Differential Equation | 61 minutes   | 16 (45.71%)      | 35
Total                 | 52 minutes   | 73 (47.10%)      | 155

Table 2. Average debugging time and the use of print statements
The error reports show a staggering 52-minute average for each of the 155 bugs that were reported. This is a lot higher than we had suspected, and a good indication that it really is difficult to locate and correct errors when dealing with a number of processes executing concurrently and communicating asynchronously, especially when not using any
parallel debugging tools. For comparison, the average time it took to correct the 23 sequential errors reported was 37 minutes per error. Table 2 shows that for certain problems (like the partial sum problem), the debugging task was accomplished using print statements 73.09% of the time. The average use of print statements for debugging was 47.10%. For sequential programs, it was stated in [13] that 90% of programmers still used print statements as their primary debugging tool. The use of print statements as a primary debugging tool for parallel message passing programs can adversely impact the time the debugging takes: not only can it be challenging to get the output re-routed to the console, but with a number of processes all using the same console, the output will be interspersed, and the interpretation of the output becomes more challenging.

6. Results of the Questionnaire

The first question of the end-of-semester questionnaire asked the students to rate the level of difficulty, on a scale from 1 (easy) to 5 (hard), of 4 different topics. The results of this question can be seen in table 3.

Data Decomposition | Functional Decomposition | Communication Calls | Debugging
2.92               | 3.35                     | 2.31                | 4.04

Table 3. Average level of difficulty for the topics in question 1 (out of 5 possible)
As expected, debugging proved to be the most difficult task associated with writing parallel message passing programs. This again emphasizes the need for techniques and tools for debugging parallel message passing programs. The second question on the survey asked which part of developing a parallel program was regarded as the hardest. The answers covered all 4 of the topics from table 3, with approximately 25% answering ‘debugging’. To quote one of the answers: “Debugging, and debugging, and sometimes debugging.” The third question asked for a list of the three kinds of errors encountered most often. The most frequent answers were: processes die prematurely, incorrect data transferred, mismatch between sender and receiver, sequential errors, and incorrect API usage. The second most frequent answers (to question 3) included deadlock, process rank problems, and message tag problems. For this question the answers fall into 3 clear categories: errors in the sequential code, errors associated with message content, and finally errors in the overall communication protocol. We will return to this division later and show why it is an important grouping, and how it has been used to develop a new debugging methodology. The fourth question asked which technique was most frequently used for debugging. Most questionnaires had at least 2 different answers, but a staggering 100% of them contained “print statements” as a primary or secondary debugging tool. This of course does not mean that all errors were located by the use of print statements; as reported in the previous section, not all errors were caught and corrected using print statements, but a large percentage were. The second most frequent answer for debugging approaches was various types of manual code inspection or manual code execution.
Print statements, code inspection, and hand simulation are all covered by Eisenstadt’s two categories, “Inspeculation” and “Gather Data”, which for sequential programs accounted for 25.5% and 53% respectively. That is, 78.5% of all errors in sequential programs were found using techniques in one of these two groups. According to the error report surveys for the parallel programs, 100% of the answers to how an error was found fall into one of these categories.
The last two questions asked what kind of programming and debugging support the programmer would like. Two answers stood out: integrated programming/development environments, and debugging tools specifically tailored to message passing programs rather than just a sequential debugger attached to each process.

7. Conclusions from the Survey

The overall result of the end-of-semester questionnaire was that most programmers still debug the way they do when writing sequential programs, which unfortunately is primarily by the use of print statements. Furthermore, the errors seemed to fall into one of 3 main categories: sequential errors, message errors, and protocol errors. There was overall agreement that (better) IDEs and specialized debugging tools were the most useful programming/debugging support that a programmer could ask for.

7.1. The Error Reports

According to table 1, the error type most often reported was ‘Protocol Problem’. Surprisingly, ‘API Usage’ takes second place, followed by ‘Data Decomposition’, ‘Sequential Error’, and finally ‘Message Problem’ and ‘Functional Decomposition’. The 7 error categories can be grouped into two groups: i) pre-programming/planning problems, and ii) actual programming errors. The former covers problems that fall into the data decomposition and functional decomposition categories, and the latter covers sequential errors, message problems, protocol problems, and other problems. The first group is more related to the theoretical development/planning of the program than to the actual programming: if the data is laid out wrongly or if the functionality has been decomposed incorrectly, no debugging can correct the problem. Thus, the second group contains the actual errors that can be corrected in the program text (i.e., those which do not require a revision of the actual parallel algorithm). The API Usage category was intentionally left out of the two groups listed above. We did this for a number of reasons.
First of all, one can argue that problems using the API will disappear as the programmer becomes more familiar with the message passing interface, and thus are not a real threat to program development. On the other hand, one can argue that they constitute errors that can be fixed in the code, and should therefore be included in the second group. Often, an incorrect use of a message passing API can be caught and corrected by inspecting the content of messages, so for now we chose to leave this category out.

7.2. Program Development

A little more than 25% of errors are attributed to problems associated with data or functional decomposition. This indicates that there is a genuine need for tools and/or techniques in this area to help the programmer reduce the number of errors and possibly reduce development time. This is a challenging problem to solve. One way of reducing errors of this type is to use development environments that support certain parallel patterns; however, it is not always possible to fit an algorithm to a known pattern. Since we are focusing on the debugging issue, we leave this problem as a research topic for the future.

7.3. Program Debugging

The remaining 75% of errors (including API Usage) are associated with actual programming errors; that is, they require the programmer to actually debug the code to correct the error that
causes the problem. Existing debugging tools for parallel message passing programs have been unable to fully embrace a general debugging technique for such programs; in general, existing debuggers can be divided into two categories: i) N-version debuggers, which are extensions of sequential debuggers where one debugging process is attached to each process in the parallel system; and ii) debugging environments, which are specialized environments that support debugging through break points and macro stepping. Part of the problem with N-version debuggers is the enormous amount of information displayed to the programmer; this often renders the technique useless because of information overload. The problem with both N-version debuggers and the environments is that their primary focus is on the sequential code. Naturally, all errors lead back to the code, but there is no support for higher levels of debugging, including errors related to messages or the protocol. Thus, the programmer is left to perform this debugging through an interface that is not meant for it. This makes debugging a very tedious and challenging task. The results of the error-reporting page show that approximately 15% of errors are sequential errors, so the perfect debugging tool must of course support debugging of the sequential code; but also, as almost 26% of the errors concern messages or protocols, it must be capable of operating at higher levels that include messages, their content, and the overall communication protocol of the program. These observations led us to formulate a new debugging methodology developed around this breakdown of the programming domain. We believe a successful tool must support debugging at the three different levels.
At the sequential level it should be easy for the programmer to deploy a sequential debugger/tool or debugging technique related to sequential code, without the information overload associated with the N-version debugging technique. Similarly, it should be easy to deploy tools specifically designed to locate and correct errors at the remaining two levels. We refer to this new technique as multilevel debugging. In the following section we briefly describe the idea behind it.
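A tool operating at the message level might, for instance, match logged sends against logged receives after a run, flagging sends that were never received. The following C sketch is purely illustrative: the record layout and the function are our own hypothetical construction, not part of any tool described in this paper.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* One logged communication event: a send or a receive.
 * Hypothetical record layout, for illustration only. */
typedef struct {
    int src, dst, tag;   /* sender, receiver, message tag */
    char kind;           /* 'S' = send, 'R' = receive */
} msg_event;

/* Count sends that have no matching receive (same src, dst and tag).
 * Each receive is consumed by at most one send. */
int unmatched_sends(const msg_event *log, int n)
{
    bool used[256] = { false };   /* assumes n <= 256 in this sketch */
    int unmatched = 0;

    for (int i = 0; i < n; i++) {
        if (log[i].kind != 'S')
            continue;
        bool found = false;
        for (int j = 0; j < n && !found; j++) {
            if (!used[j] && log[j].kind == 'R' &&
                log[j].src == log[i].src &&
                log[j].dst == log[i].dst &&
                log[j].tag == log[i].tag) {
                used[j] = true;   /* consume this receive */
                found = true;
            }
        }
        if (!found)
            unmatched++;
    }
    return unmatched;
}
```

A protocol-level checker would extend the same idea from simple send/receive pairing to ordering and tag rules across the whole communication structure.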
8. Multilevel Debugging

In this section we briefly describe the multilevel debugging methodology. We developed this technique as a potential solution to the shortcomings of existing debugging tools and techniques. We based the multilevel debugging methodology on the decomposition of the PCAM model as described earlier; this resulted in a bottom-up approach rather than the conventional top-down approach, which had proved fruitless because of problems such as information overload. Some of the basic ideas behind multilevel debugging are:

• Information about messages and their content, as well as information about the protocol, should be extracted from the running program and used when debugging errors at these higher levels.
• Support for mapping the manifestation of an error (the effect) back to the actual code that caused it should be provided.
• Strong support for sequential debugging of separate processes, without causing information overload, must be provided.
• Debugging tasks should be automated whenever possible. Examples of this include deadlock detection and correction.

Some of the general goals for multilevel debugging include:

• Computable relations should be computed on request, not left for the user to figure out.
• Displayable state should be displayed on request, not left for the user to draw or visualize.
• Views for important parts of the program (key players), other than variables, should be available.
• Navigation tools tailored to specific tasks/levels should be available.

We have described a number of tools and techniques from the multilevel debugging framework in [14,6,15]. A complete reference can be found in [16].

9. Conclusion

We have presented the results of two different surveys, which have shown that errors fit into three distinct categories: sequential errors, message errors, and protocol errors. In addition, the surveys showed that debugging by print statements is still frequently used, and extremely time consuming. We introduced a new debugging technique referred to as multilevel debugging, which decomposes the debugging task into three categories: sequential, message, and protocol. The results of the questionnaires and error reports seem to support the decomposition of the PCAM model chosen for multilevel debugging. Since multilevel debugging includes tools that are tailored to the levels associated with messages/message content and the protocol of the system, we believe that not only will it be easier to locate errors at these levels, but also that the time it takes to find and correct them will be reduced. Multilevel debugging was initially developed as a new debugging methodology in [16] and implemented in a text-based version for PVM programs. A newer version, also for PVM, with a graphical user interface (Millipede) was presented in [17], and an initial version for MPI (IDLI) has been presented in [18].

10. Future Work

We wish to complete the MPI implementation, deploy it to programmers, and redo the survey to see whether errors are corrected faster using these new debugging techniques. In addition, as the surveys have shown, 25% of errors are associated with decomposition.
It is clear that tools to assist the correct decomposition of data and functionality are needed. Supporting new tools through the use of recorded data from program executions also remains an interesting area of research. We strongly believe that a message passing library like MPI must incorporate debugging support directly in the library, or through a system like Millipede or IDLI. The first step to making debugging ‘easier’ is to facilitate the extraction of the necessary information about messages and the protocol from the message passing system itself.

References

[1] I. Foster. Designing and Building Parallel Programs: Concepts and Tools for Parallel Software Engineering. Addison Wesley, 1995.
[2] G. Fox, M. Johnson, G. Lyzenga, S. Otto, J. Salmon, and D. Walker. Solving Problems on Concurrent Processors: General Techniques and Regular Problems, volume 1. Prentice Hall International, 1988.
[3] J. Dongarra. MPI: A Message Passing Interface Standard. The International Journal of Supercomputers and High Performance Computing, 8:165–184, 1994.
[4] A. Geist et al. PVM: Parallel Virtual Machine. A User’s Guide and Tutorial for Networked Parallel Computing. Prentice Hall International, 1994.
[5] K. Araki, Z. Furukawa, and J. Cheng. A General Framework for Debugging. IEEE Software, pages 14–20, May 1991.
[6] J. B. Pedersen and A. Wagner. Correcting Errors in Message Passing Systems. In F. Mueller, editor, High-Level Parallel Programming Models and Supportive Environments, 6th International Workshop, HIPS 2001, San Francisco, CA, USA, volume 2026 of Lecture Notes in Computer Science, pages 122–137. Springer Verlag, April 2001.
[7] M. Eisenstadt. My Hairiest Bug War Stories. In The Debugging Scandal and What to Do About It, Communications of the ACM. ACM Press, April 1997.
[8] J. Gray. Why do Computers Stop and What Can be Done About it? Proceedings of the 5th Symposium on Reliability in Distributed Software and Database Systems, pages 3–12, January 1986.
[9] C. M. Pancake. What Users Need in Parallel Tool Support: Survey Results and Analysis. Technical Report CSTR 94-80-3, Oregon State University, June 1994.
[10] W. L. Johnston. An Effective Bug Classification Scheme Must Take the Programmer into Account. Proceedings of the Workshop on High-Level Debugging, Palo Alto, California, 1983.
[11] J. C. Spohrer, E. Soloway, and E. Pope. A Goal/Plan Analysis of Buggy Pascal Programs. Human-Computer Interaction, 1(2):163–207, 1985.
[12] D. E. Knuth. The Errors of TeX. Software – Practice and Experience, 19(7):607–685, July 1989.
[13] C. M. Pancake. Why Is There Such a Mis-Match between User Need and Parallel Tool Production? Keynote address, 1993 Workshop on Parallel Computing Systems: A Dialog between Users and Developers, April 1993.
[14] J. B. Pedersen and A. Wagner. Sequential Debugging of Parallel Programs. In Proceedings of the International Conference on Communications in Computing, CIC’2000. CSREA Press, June 2000.
[15] J. B. Pedersen and A. Wagner. Protocol Verification in Millipede. In Communicating Process Architectures 2001. IOS Press, September 2001.
[16] Jan Bækgaard Pedersen. Multilevel Debugging of Parallel Message Passing Programs.
PhD thesis, University of British Columbia, 2003.
[17] Erik H. Tribou. Millipede: A Graphical Tool for Debugging Distributed Systems with a Multilevel Approach. Master’s thesis, University of Nevada, Las Vegas, August 2005.
[18] Hoimonti Basu. Interactive Message Debugger for Parallel Message Passing Programs Using LAM-MPI. Master’s thesis, University of Nevada, Las Vegas, December 2005.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
Compiling CSP
Frederick R.M. BARNES
Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NF, England
[email protected]

Abstract. CSP, Hoare’s Communicating Sequential Processes, is a formal language for specifying, implementing and reasoning about concurrent processes and their interactions. Existing software tools that deal with CSP directly are largely concerned with assisting formal proofs. This paper presents an alternative use for CSP, namely the compilation of CSP systems to executable code. The main motivation for this work is in providing a means to experiment with relatively large CSP systems, possibly consisting of millions of concurrent processes — something that is hard to achieve with the tools currently available.

Keywords. CSP, compilers, occam-pi, concurrency
Introduction

Hoare’s CSP [1,2] is a process algebra used to describe concurrent systems and their interactions. CSP allows for reasoning about a system’s denotational semantics in addition to the relatively straightforward operational semantics. A significant use of CSP remains in formal reasoning, for example in conjunction with the occam2 [3] programming language [4,5], and more recently occam-π [6,7]. This paper presents another use for CSP: as a language for compilation to executable code. This differs from CSP tools such as FDR [8], which are designed for formal reasoning — e.g. proving that one system is a refinement of another. The work described here is much less complete in the formal sense; FDR will cater for all possible traces of a CSP system, whereas the code generated by the compiler described here executes an arbitrary trace of the system. That trace may be different each time the program is executed, or the same, consistent with the expected rules of non-determinism. The advantage of the work described here is that it allows relatively large CSP systems to be exercised, since state-space explosion is not an issue. The compiler used by this system is an experimental occam-π compiler, intended to replace the existing modified Inmos occam compiler used by KRoC [9,10]. Although the compiler is designed primarily for occam-π, its structure allows other languages and code-generation targets to be supported. Section 1 describes the motivation for this work. An overview of the compiler infrastructure is given in section 2, followed by implementation details of the CSP support in section 3. Initial conclusions, including a brief discussion of performance, and details of future work are given in section 4.

1. Motivation

The main motivation for this work is to support the investigation of, and experimentation with, large CSP systems, possibly containing millions of concurrent processes. The system also provides
F.R.M. Barnes / Compiling CSP
the ability to experiment with CSP’s operational semantics — e.g. introducing prioritised external choice as defined by CSPP [11]. Supporting the implementation also serves to exercise new multi-way synchronisation mechanisms implemented inside the occam-π run-time system, which are based on the techniques described in [12].

2. The NOCC compiler

The majority of the work described here has been implemented inside an experimental occam-π compiler named ‘NOCC’ (new occam-π compiler). Unlike many compilers currently available, NOCC is not centred around a specific language or code-generation target. Instead, NOCC attempts to provide an extensible compiler framework, into which new language and code-generation ‘modules’ can be added. The support for CSP within the compiler grew out of an experimental occam-π extension allowing specifications of trace patterns (in the form of CSP equations) to be attached to occam-π channel-bundle declarations. This reflects another feature of the compiler, namely the ability to support multiple source languages in a single input file. Likewise, the compiler is capable of generating multiple output formats, e.g. for mixed hardware/software compilation. Building compilers this way is certainly not a new idea [13], and there has been work on multi-language and multi-target compilers elsewhere. Where this compiler differs from others is in its treatment of concurrency. The occam/CSP concurrency model has been around for some time, yet there have been few compilers capable of taking full advantage of it. Creating systems consisting of millions of interacting concurrent processes places some peculiar requirements on the compiler. The two most important are efficient memory allocation (for processes at run-time) and safety checks (freedom from parallel race-hazard and aliasing errors).
Also important is the efficiency of the generated code, particularly where large numbers of processes are involved — a responsibility shared with the run-time system. It appears that many existing compilers fail to meet all these demands simultaneously, though some are heading in that direction. It should be noted that these compilers by and large target sequential languages, or languages where concurrency has been added as an afterthought. Checking the safety of parallel programs in cases where the language does not rigorously police concurrency can be difficult [14] — as is the case with ‘threads and locks’ models of concurrency (e.g. Java threads [15,16] and POSIX threads [17]). In terms of compiler research, NOCC does not appear to add anything substantial to what is already known about building compilers. However, it is perhaps better suited to the compilation of very finely grained parallel programs than other compilers. In some ways, the general structure of NOCC is similar to that of the existing Inmos occam compiler (now heavily modified for occam-π), albeit a somewhat modernised version of it. General information pertaining to NOCC can be found at [18]. It is worth noting that NOCC is a very experimental compiler, and that the C language (in which NOCC is written) might not be the most suitable language for compiler implementation. The style of the compiler code itself is not too dissimilar to aspect-orientation, which has been used successfully in building other compilers [19]. The particular feature common to both is the ability to alter the behaviour of compiler parts at run-time. In aspect-orientation this is done in the language; in NOCC it is handled by changing function-pointers in structures.

2.1. Structure of NOCC

Figure 1 shows the structure of the compiler as a series of tree passes that transform the parse-tree at each stage. Modules inserted into the compiler may add their own passes, if the
existing structure of the compiler cannot support something they require.
[Figure 1. Structure of the NOCC compiler. The compiler is organised as a pipeline of tree passes. Front end: lexer, parser, pre-scope, scope (language specific); pre-check, constant propagation, type-check, alias-check, usage-check, definedness-check, fe-trans (language assisted). Back end: be-trans, pre-map, name-map, be-map (language-to-target binding); pre-allocate, allocate, pre-code, code-gen (target specific).]
Most of the compiler passes have self-explanatory names. The ‘pre-scope’ pass is used to perform tree rewriting prior to scoping, e.g. for pushing name-nodes up the tree, where those names are declared inside their effective scope. The ‘pre-check’ pass runs prior to the parallel-usage, aliasing and definedness checks, and is used to set up tree nodes that require special checking. The ‘fe-trans’ and ‘be-trans’ passes successively simplify the parse tree, making it more suitable for code-generation. The ‘be-trans’ pass is the first point at which target information is available. The ‘name-map’ pass generates nodes representing memory reservations and memory references — e.g. describing the memory requirements of an occam-π ‘INT’ variable on a 32-bit machine. The ‘be-map’ pass is primarily concerned with expression evaluation, handling the allocation of temporary variables where required. The ‘pre-allocate’ and ‘allocate’ passes perform memory allocation proper. For the default virtual transputer ETC [20] target, this includes allocations into workspace (local process stack), vectorspace (large process-local objects) and mobilespace (a global memory space). These are statically allocated memories, independent of memory dynamically allocated and managed at run-time. The only code-generator currently supported by the compiler generates an extended ETC code. This is transformed into native code by the existing KRoC tool-chain. Another code-generator is under development that attempts to generalise all register-based architectures (which would ultimately remove the need for the existing native-code translator). The advantage here is to avoid inefficiencies introduced by the intermediate code.

2.2. Parse Tree Structures

Internally, the representation of parse trees is generalised. This greatly aids the automatic processing of tree-nodes — e.g. when constructing new nodes in the parser, or when rewriting trees as part of compiler transformations.
It also allows the set of node types to be modified at run-time, e.g. to support compiler extensions that add their own node types. More importantly, it enables code in one part of the compiler to operate on another part’s tree nodes without explicit knowledge of them. This is substantially different from the existing Inmos occam compiler, where the set of tree nodes is fixed, and the code makes extensive use of ‘switch()’ statements. In terms of data structures, what NOCC ends up with is not entirely unlike inheritance in object orientation, except that the equivalent “class hierarchies”
are set up at run-time. The similarity with aspect orientation is the ability to change these at run-time. Each tree node in the compiler is associated with a particular node-tag. These provide a way of identifying the different tree nodes, e.g. “MCSPSEQ” and “MCSPPAR”. Associated with each node-tag is a node-type, usually applied to a set of related tags. For example, ‘MCSPSEQ’ and ‘MCSPPAR’ are tags of an “mcsp:dopnode” type (dyadic operator). It is this node-type that defines the operations performed on a node in the different compiler passes — the node-type structure contains function pointers that are called for each pass. A compromise between speed and size is made here — the cost of calling a function is mostly constant, but the code called (for a particular node type) will often have to perform a series of if-else tests to determine which node tag is involved (and therefore how the pass should proceed). A constant-cost ‘switch()’ statement cannot be used, as node tags are allocated dynamically. Figure 2 shows the C structures used to represent parse-tree nodes, together with their linkage to node-tags and node-types.

[Figure 2. NOCC parse-tree structures: the C structs tnode_t (with ntdef_t *tag, lexfile_t *org_file, int org_line, and sub-item/hook arrays), ntdef_t (node-tag: name, index, tndef_t *ndef, flags), tndef_t (node-type: name, index, nsub/nname/nhooks counts, compops_t *cops, langops_t *lops), and the compops_t/langops_t function-pointer tables (e.g. prescope, codegen, getdescriptor, isconst).]
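A stripped-down sketch of this arrangement, with invented structure names loosely mirroring figure 2 (not NOCC’s actual definitions), shows how a pass dispatches through the node-type’s function pointer and the callee then tests the tag:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct tnode;   /* forward declaration */

/* Per-node-type compiler operations: one function pointer per pass. */
typedef struct {
    int (*typecheck)(struct tnode *);
    int (*codegen)(struct tnode *);
} compops;

/* A node-type groups a set of related tags and carries the operations. */
typedef struct {
    const char *name;
    compops *cops;
} ntype;

/* A node-tag identifies one kind of node within a type. */
typedef struct {
    const char *name;
    ntype *ndef;
} ntag;

/* A parse-tree node references its tag. */
typedef struct tnode {
    ntag *tag;
} tnode;

/* The callee inspects the tag to decide how to proceed -- a constant-cost
 * switch() is impossible, since tags are created at run-time. */
static int dop_typecheck(tnode *t)
{
    if (strcmp(t->tag->name, "MCSPSEQ") == 0)
        return 1;                 /* sequential composition */
    if (strcmp(t->tag->name, "MCSPPAR") == 0)
        return 2;                 /* parallel composition */
    return 0;                     /* unknown dyadic operator */
}

static compops dop_ops = { dop_typecheck, NULL };
static ntype dopnode = { "mcsp:dopnode", &dop_ops };
static ntag tag_seq = { "MCSPSEQ", &dopnode };
static ntag tag_par = { "MCSPPAR", &dopnode };

/* A pass dispatches through the function pointer, never on the tag itself. */
int run_typecheck(tnode *t)
{
    return t->tag->ndef->cops->typecheck(t);
}
```

Replacing `dop_ops.typecheck` at run-time would alter the behaviour of the pass for every node of that type, which is the aspect-orientation-like property described above.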
A lot of the node-specific code that sits behind these function pointers is concerned with parse-tree rewriting. To aid this, a tree-rewriting mini-language is being developed within the compiler. By generalising this, it becomes possible for source files to define their own language extensions, together with the tree transformations required to produce compilable parse-trees. In some ways, this is heading towards a compiler that implements a basic feature set, then loads the required language ‘source’ at run-time before compiling code proper; not entirely unlike compilers such as Peri Hankey’s “language machine” [21].

2.3. Deficiencies

With regard to NOCC’s plug-in extensions, there are currently no guarantees (beyond rigorous testing) that two or more extensions will coexist sensibly — and even then, testing at compiler-development time may not be an option, e.g. for new syntactic structures defined inside the source files being compiled. This issue will be addressed in the future, once the compiler supports enough of the occam-π language to be useful in practical applications¹; existing work has already investigated this issue [22], but for a compiler written in Scheme. The potential problems lie not so much with incompatibilities in tree nodes, but with incompatibilities in the language grammar — e.g. interaction between a module that provides multi-variable declarations, “INT a,b:”, and one that provides initialising declarations, “INT x=42:”. The resulting problem is whether “INT f=3, g=24:” or “INT f,g = 3,24:” would parse, and if so, whether either would have the expected semantics (multiple initialised integer variables).

¹ At the time of writing, simple programs compile successfully, but without many of the checks that the existing compiler applies. Basic tests for channel and variable parallel-usage have been implemented successfully.

3. Compiling CSP

This section describes the details of generating executable code from CSP expressions within the NOCC compiler. It also sheds some further light on the details of programming within NOCC. The resulting language supported by the compiler is loosely termed ‘MCSP’ (machine-readable CSP), but is currently incompatible with other machine-readable CSP representations (e.g. that used by FDR).

3.1. Syntax

The way in which NOCC operates requires front-end language modules to supply the parser grammar BNF-style or as textual DFAs (deterministic finite automata, or state machines). This allows the language syntax to be readily changed, particularly within plug-in modules which modify the grammar — or even within source files themselves. The following, for example, describes how CSP fixpoints are represented (and is the literal text fed to the compiler at run-time):

mcsp:fixpoint ::= [ 0 @@@ 1 ] [ 1 mcsp:name 2 ] [ 2 @@. 3 ] [ 3 mcsp:process 4 ] [ 4 {<mcsp:fixreduce>} -* ]
This is a DFA representation that will parse “@” followed by a name, “.” and a process. ‘mcsp:fixreduce’ is a reduction rule that specifies how to generate the tree-node result. In this particular case it is a generic reduction rule — a program for a miniature stack-machine that operates on parse-tree nodes:

parser_register_grule ("mcsp:fixreduce",
    parser_decode_grule ("SN0N+N+VC2R-", mcsp.tag_FIXPOINT));
The ‘tag_FIXPOINT’ is a reference to a node-tag, defined earlier with:

mcsp.tag_FIXPOINT = tnode_newnodetag ("MCSPFIXPOINT", NULL,
    mcsp.node_SCOPENODE, NTF_NONE);
This further references the ‘node_SCOPENODE’ node-type, the compiler structure onto which the MCSP module attaches various functions to handle transformations in the different compiler passes. At the parser top-level, the ‘mcsp:fixpoint’ rule is incorporated statically in the definition of ‘mcsp:process’. However, it could be added dynamically with the BNF-style rule:

mcsp:process +:= mcsp:fixpoint
Using these descriptions, the resulting parser is able to handle fixpoints in the input, such as:

PROCESS (e) ::= @x.(e -> x)
This represents a process that continually engages on the parameterised event ‘e’, and could form a whole MCSP program. Figure 3 shows the resulting parse-tree structure, prior to scoping.
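Read operationally, each clause of the ‘mcsp:fixpoint’ DFA above names a state, a token to match, and a successor state. A minimal table-driven matcher for just this one rule might look as follows in C (the token codes and names are invented for the sketch; NOCC’s real DFA engine is considerably more general):

```c
#include <assert.h>

/* Token kinds for the sketch: '@', a name, '.', a process. */
enum tok { TOK_AT, TOK_NAME, TOK_DOT, TOK_PROCESS };

/* One transition: in state `from`, consuming `tok` moves to `to`.
 * Mirrors "[ 0 @@@ 1 ] [ 1 mcsp:name 2 ] [ 2 @@. 3 ] [ 3 mcsp:process 4 ]". */
struct trans { int from; enum tok tok; int to; };

static const struct trans fixpoint_dfa[] = {
    { 0, TOK_AT,      1 },
    { 1, TOK_NAME,    2 },
    { 2, TOK_DOT,     3 },
    { 3, TOK_PROCESS, 4 },   /* state 4 is accepting (the reduction fires) */
};
#define NTRANS (sizeof fixpoint_dfa / sizeof fixpoint_dfa[0])

/* Returns 1 if the token sequence matches the fixpoint rule. */
int match_fixpoint(const enum tok *input, int len)
{
    int state = 0;
    for (int i = 0; i < len; i++) {
        int moved = 0;
        for (int t = 0; t < (int)NTRANS; t++) {
            if (fixpoint_dfa[t].from == state &&
                fixpoint_dfa[t].tok == input[i]) {
                state = fixpoint_dfa[t].to;
                moved = 1;
                break;
            }
        }
        if (!moved)
            return 0;            /* no transition: parse error */
    }
    return state == 4;           /* accepting state reached? */
}
```

In the compiler proper, reaching the accepting state triggers the ‘mcsp:fixreduce’ reduction rather than simply returning a flag.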
[Figure 3. Tree structure after parsing: an MCSPPROCDECL (mcsp:declnode) whose children are the rawnamenodes “PROCESS” and “e”, and an MCSPFIXPOINT (mcsp:scopenode) binding the rawnamenode “x” over an MCSPTHEN (mcsp:dopnode) whose operands are the rawnamenodes “e” and “x”.]
3.2. Semantics

The semantics implemented by the compiler remain faithful to CSP. External choice between two or more offered events is resolved arbitrarily. This allows, for example, the following program to choose ‘e’ over ‘f’ (if both are ready):

PROCESS (e,f) ::= @x.((e -> x) [] (f -> x))
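One simple way to resolve such a choice arbitrarily is to scan the ready guards from a rotating start index, so no guard is systematically favoured. This is a hypothetical sketch of the selection step only, not the run-time’s actual policy:

```c
#include <assert.h>

/* Given the set of ready guards, pick one.  Scanning from a rotating
 * start index avoids always favouring the first guard.  Returns the
 * chosen guard index, or -1 if no guard is ready. */
int alt_select(const int *ready, int n, int start)
{
    for (int i = 0; i < n; i++) {
        int g = (start + i) % n;
        if (ready[g])
            return g;
    }
    return -1;   /* no guard ready: the process would block */
}
```

Varying `start` between executions gives different (but always valid) traces, consistent with the rules of non-determinism described above.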
There is a difference concerning the scoping of names. In the short examples seen so far, all events are bound to parameters. This need not be the case, however. Unbound events are automatically inserted into parameter lists by the compiler and collected together at the top level. In traditional CSP, which does not support parameterised events, the definition of something (e.g. ‘PROCESS’) is entirely in the meta-language — the right-hand sides of such definitions could be literally substituted without changing the semantics of the system. This is not the case here, for example:

FOO ::= e -> SKIP
BAR (e) ::= e -> FOO
There are two ‘e’ events in this system — one free and one bound to ‘BAR’. It is equivalent to the system:

BAR (f) ::= f -> e -> SKIP
Literal substitution of ‘FOO’ would produce a different system. The two behaviours not currently supported are interleaving and interrupts. Interrupts are not too much of a concern for implementation — event synchronisations in affected processes become a choice between the existing event and the interrupt event, followed by the equivalent of a ‘goto’ into the interrupt process.

3.2.1. Interleaving

Interleaving gives rise to difficulties in the implementation, but only where ‘shared’ events are involved. For example:

PROCESS (e,f,g) ::= (e -> f -> SKIP) ||| (e -> g -> SKIP)
In such a system, either the left-hand-side synchronises on ‘e’ or the right-hand-side does, but they do not synchronise on ‘e’ between themselves. The implementation of events is handled using a new version of the multi-way synchronisation described in [12]. At present,
however, these only support synchronisation between all processes enrolled on the event — providing an implementation for interleaving is complex. A possible solution is to rewrite the affected expressions, separating out the affected events. This, however, does not scale well — memory occupancy and run-time cost increase. The solution to be implemented involves a change to the implementation of multi-way events, such that interleaving is explicitly supported. Parallel synchronisation is a wait-for-all mechanism; interleaving is wait-for-one. These represent special cases of a more general synchronisation mechanism, wait-for-n. Although CSP provides no model for this (the denotational semantics would be very complex), it does have uses in other applications [23], and is something that could be supported for occam-π.

3.3. Code generation

Supporting multiple back-end targets creates some interesting challenges for the compiler. Currently this is handled by abstracting out ‘blocks’ and ‘names’, which describe the run-time memory requirements. The compiler uses this information to allocate variables, parameters and other memory-requiring structures. Figure 4 shows the transformed sub-tree for the ‘FIXPOINT’ node from figure 3, simplified slightly. The new nodes whose names start ‘krocetc’ belong to the KRoC ETC code-generator within the compiler. The CSP (or occam-π) specific portions of the compiler are only aware of these nodes as back-end blocks, block-references, names or name-references.

[Figure 4. Sub-tree structure after name-map: an MCSPILOOP (mcsp:loopnode) over an MCSPSEQNODE (mcsp:cnode), which contains a KROCETCNAME (krocetc:name, with namehook), an MCSPSKIP (mcsp:leafproc), and an MCSPALT (mcsp:snode) whose MCSPGUARD (mcsp:guardnode) holds a KROCETCNAMEREF (krocetc:nameref) and an MCSPSKIP (mcsp:leafproc).]
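Ignoring scheduling and blocking entirely, the counting rule behind wait-for-all, wait-for-one and the proposed wait-for-n of section 3.2.1 can be modelled in a few lines of C. This is a deliberately simplified model, not the run-time’s implementation:

```c
#include <assert.h>

/* A multi-way event: `enrolled` processes may synchronise on it;
 * it fires once `needed` of them have arrived.
 *   needed == enrolled : wait-for-all (CSP parallel synchronisation)
 *   needed == 1        : wait-for-one (interleaving)
 *   otherwise          : the general wait-for-n case */
typedef struct {
    int enrolled;
    int needed;
    int waiting;     /* processes currently blocked on the event */
} mw_event;

/* A process offers the event.  Returns 1 if the synchronisation
 * completes (releasing all waiters), 0 if the caller must block. */
int mw_sync(mw_event *ev)
{
    ev->waiting++;
    if (ev->waiting >= ev->needed) {
        ev->waiting = 0;     /* release everyone; reset for the next round */
        return 1;
    }
    return 0;                /* caller blocks until the count is reached */
}
```

With this framing, parallel synchronisation and interleaving differ only in the value of `needed`, which is why a single generalised mechanism in the run-time is attractive.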
The ‘krocetc:nameref’ node is a reference to the parameterised event involved, which has since been transformed into a ‘krocetc:name’ describing memory requirements and levels of indirection. The ‘krocetc:name’ node left in the tree reserves space for the occam-π style ‘ALT’ (needed by the run-time). Handling these in a language- and target-independent manner is only partially complete. The back-end definition within the compiler maintains details of the memory required for constructs such as ‘ALT’s and ‘PAR’s, and this is a fixed set. In the future it may be desirable to generalise these features, but so far this has not been necessary. The combination of occam-π and ETC appears to produce a fairly broad set of compiler features, certainly more than sufficient for purely sequential languages, or for languages with more primitive concurrency models (e.g. threads and locks).
The complete resulting parse-tree is much larger than the ones shown here. Inserting the various back-end nodes will often double the size of the existing tree, including name-nodes for entities such as a procedure’s return-address, or static-links to higher lexical levels. Furthermore, the top-level process (and main program) generated by the compiler is not the program’s top-level process. It is instead the parallel composition of the program’s top-level process with a newly created ‘environment’ process. It is this environment process which produces the program’s output at run-time, by writing bytes to the KRoC top-level ‘screen’ channel. For example, the earlier system:

PROCESS (e) ::= @x.(e -> x)
is transformed by the compiler into the following (illegal) code:

PROCESS (e) ::= @x.(e -> x)
ENVIRONMENT (out,e) ::= @z.(e -> out!"e\n" -> z)
SYSTEM (screen) ::= (PROCESS (k) || ENVIRONMENT (screen,k)) \ {k}
When executed, this system simply outputs a continuous stream of “e”s, each on a new line. Normally such code would be illegal since the ‘out’ event is really an occam channel; the compiler also currently lacks the support to parse the output syntax shown. An explicit syntax and semantics for handling occam-style communication may be added at a later date.

The ETC generated by the compiler is not too dissimilar to what might be expected from a comparable occam-π program. The main difference is the use of new multi-way synchronisation instructions, implemented in the run-time kernel (a modified CCSP [24]). Because the CSP parts of the compiler generate code intended for the KRoC run-time, there is no reason why the two cannot be used together within the same program — once the necessary support for multi-way synchronisation has been incorporated into the occam-π code within NOCC.

3.3.1. Other compiler output

In addition to the ETC or other code output, the compiler generates a ‘.xlo’ file. This is an XML file that describes the compiled output. These files are read back by the compiler when separately compiled code is incorporated, e.g. through a ‘#USE’ directive. Included in the output is all the information necessary to instance the separately compiled code, typically memory requirements and entry-point names. Optionally included is a digitally signed hash-code of the generated code and related information in the ‘.xlo’ file. Public/private key-pairs are used, allowing verification that code was generated by a particular compiler. This will typically be useful in distributed occam-π mobile-agent systems, when a node that receives precompiled code needs to guarantee the origin of that code.

3.4. Run-time support

In order to support MCSP programs, multi-way synchronisation instructions have been added to the KRoC run-time. These consist of modified ‘ALT’ start, end and wait instructions, plus additional enable-guard and disable-guard instructions.
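The behaviour of the transformed SYSTEM above can be sketched with ordinary threads, using a two-party barrier as a stand-in for the hidden event ‘k’. This is an illustrative Python model only (not the KRoC run-time); the ‘process’ and ‘environment’ function names are hypothetical, and the run is bounded so that it terminates.

```python
import threading

# The hidden event k is a two-party synchronisation: both PROCESS and
# ENVIRONMENT must engage before either proceeds.
k = threading.Barrier(2)
screen = []          # stands in for the KRoC top-level 'screen' channel
CYCLES = 5           # bounded for the demo; the real system runs forever

def process():       # PROCESS (e) ::= @x.(e -> x)
    for _ in range(CYCLES):
        k.wait()

def environment():   # ENVIRONMENT (out,e) ::= @z.(e -> out!"e\n" -> z)
    for _ in range(CYCLES):
        k.wait()
        screen.append("e\n")

t1 = threading.Thread(target=process)
t2 = threading.Thread(target=environment)
t1.start(); t2.start()
t1.join(); t2.join()
print("".join(screen), end="")   # prints five lines of "e"
```

Each ‘k.wait()’ pairs one engagement by PROCESS with one by ENVIRONMENT, so exactly one “e” line is produced per synchronisation.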
As can be seen in figure 4, single synchronisations are transformed into an ‘ALT’ structure with only one guard — an explicit synchronisation instruction may be added in the future. In order to support the new compiler, various other changes have been made to the translator and run-time. The most significant is the removal of the fixed workspace-pointer adjustment on subroutine calls and returns (the ‘CALL’ and ‘RET’ transputer instructions [25]). As a result, the ‘CALL’ instruction no longer expects to have subroutine parameters on the stack — which would normally be copied into space reserved by the fixed workspace adjustment. The new compiler instead generates ‘CALL’ and ‘RET’ instructions with an extra ‘offset’ operand that specifies the necessary workspace adjustment. This makes code-generation and the general handling of subroutine calls simpler, and potentially less expensive.

3.5. Supported constructs

Table 1 provides an overview of the supported constructs and their MCSP syntax. Although much of traditional CSP is supported, the language lacks features that may be useful from a pragmatic perspective — for example, variables and basic arithmetic. Such features move the system away from traditional CSP and more towards process calculi such as Circus [26]. Additional features currently being considered are covered in section 4.
    construct         CSP                    MCSP
    skip              SKIP                   SKIP
    stop              STOP                   STOP
    chaos             CHAOS                  CHAOS
    divergence        div                    DIV
    event prefix      e → P                  e -> P
    internal choice   (x → P) ⊓ (y → Q)      (x -> P) |~| (y -> Q)
    external choice   (x → P) □ (y → Q)      (x -> P) [] (y -> Q)
    sequence          P ; Q                  P; Q
    parallel          P ∥ Q                  P || Q
    interleaving      P ||| Q                P ||| Q
    hiding            P \ {a}                P \ {a}
    fixpoint          μX.P                   @X.P

Table 1. Supported MCSP constructs and syntax
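The operational difference between internal choice (‘|~|’) and external choice (‘[]’) in Table 1 can be sketched in Python. This is a toy model with hypothetical helper names — real resolution goes through the run-time’s enable-guard/disable-guard machinery — but it captures who decides: the process itself for internal choice, the environment for external choice.

```python
import random

# Internal choice |~|: the process itself decides which branch to take.
def internal_choice(p, q):
    return random.choice([p, q])()

# External choice []: the environment decides, by offering an event;
# the offered event selects the matching branch.
def external_choice(branches, offered_event):
    return branches[offered_event]()

branches = {"x": lambda: "P", "y": lambda: "Q"}
assert external_choice(branches, "x") == "P"   # environment offers x
assert external_choice(branches, "y") == "Q"   # environment offers y
assert internal_choice(lambda: "P", lambda: "Q") in ("P", "Q")
```

A process on the other side of an internal choice cannot influence the outcome, which is exactly why ‘|~|’ and ‘[]’ have different refinement properties.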
A restriction is currently placed on the use of the fixpoint operator — it may only be used to specify sequential tail-call recursion, as shown in the previous examples: effectively, anything that can be transformed into a looping structure. More general recursion using the fixpoint operator will be added at a later stage.

4. Conclusions and future work

This paper has described the basic mechanisms by which CSP programs are translated into executable code. Although the work is at an early stage and there is much more to do, it does show promise — particularly for expressibility and performance. In addition to providing a means to compile and execute CSP programs, this work serves as a good exercise for the new occam-π compiler. Importantly, the incremental addition of MCSP language features to the compiler has remained relatively simple. This is not the case for the existing occam-π compiler, where certain optimisations made early on in the compiler’s development² make adding extensions awkward. In NOCC, for instance, modifying the source language syntax is trivial, as the compiler generates the actual parser from high-level BNF or DFA definitions.

² The original occam compiler was designed to run in 2 megabytes of memory, which led to a fairly complex optimised implementation.

4.1. Performance

A standard benchmark for occam-π is ‘commstime’, a cycle of processes that continuously communicate. This has been re-engineered into MCSP, where the processes involved simply
synchronise, rather than communicate data. To make benchmarking slightly simpler, the proposed sequential-replication addition (section 4.2.2) has already been added. The commstime benchmark, with a sequential ‘delta’, is currently implemented as:

    PREFIX (in,out)        ::= out -> @x.(in -> out -> x)
    SUCC (in,out)          ::= @x.(in -> out -> x)
    DELTA (in,out1,out2)   ::= @x.(in -> out1 -> out2 -> x)
    CONSUME (in,report)    ::= @x.((;[i=1,1000000] in); report -> x)

    SYSTEM (report)        ::= ((PREFIX (a, b) || DELTA (b, c, d)) ||
                                (SUCC (c, a) || CONSUME (d, report))) \ {a,b,c,d}
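The benchmark network can be modelled directly with threads, treating each hidden event (a, b, c, d) as a two-party synchronisation. This Python sketch is illustrative only — it says nothing about real KRoC synchronisation costs — and uses a reduced, bounded cycle count so that every process terminates cleanly.

```python
import threading

# Each hidden event is a two-party synchronisation (a barrier of 2).
a, b, c, d = (threading.Barrier(2) for _ in range(4))
CYCLES = 1000                 # 1000000 in the real benchmark
reports = []

def prefix():                 # out -> @x.(in -> out -> x); in=a, out=b
    b.wait()
    for i in range(CYCLES):
        a.wait()
        if i < CYCLES - 1:    # bounded run: omit the final output
            b.wait()

def delta():                  # @x.(in -> out1 -> out2 -> x)
    for _ in range(CYCLES):
        b.wait(); c.wait(); d.wait()

def succ():                   # @x.(in -> out -> x); in=c, out=a
    for _ in range(CYCLES):
        c.wait(); a.wait()

def consume():                # (;[i=1,CYCLES] in); report
    for _ in range(CYCLES):
        d.wait()
    reports.append("report")

threads = [threading.Thread(target=f) for f in (prefix, delta, succ, consume)]
for t in threads: t.start()
for t in threads: t.join()
print(reports)                # ['report'] after CYCLES complete cycles
```

Each cycle performs four two-party synchronisations (b, c, a, d in that order), matching the paper’s count of 4 complete multi-way synchronisations per benchmark cycle.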
When compiled and executed, the program will print ‘report’ on the screen for every million cycles of the benchmark. Measuring the time between printed ‘report’s gives a fairly accurate value for synchronisation time. Each cycle of the benchmark requires 4 complete multi-way synchronisations, or 8 individual multi-way ‘ALT’s. On a 2.4 GHz Pentium-4, the time for a complete multi-way synchronisation is approximately 170 nanoseconds (with each synchronisation involving 2 processes). In fact, this figure is a slight over-estimation — it includes a small fraction of the cost of printing ‘report’, and the loop overheads. More accurate benchmarking will require the ability to handle timers, something which is considered in the following section. When using a parallel delta, the synchronisation cost is increased to around 250 nanoseconds, giving a process startup-shutdown cost of around 160 nanoseconds.

4.2. Planned additions

This section describes some of the additions planned for the MCSP language, to be implemented in the NOCC compiler. These are largely concerned with practical additions to the language, rather than semantic updates. It should be noted that some of the planned additions potentially modify the operational semantics. However, they should not modify the denotational semantics — i.e. systems should not behave in an unexpected way.

4.2.1. Alphabetised parallel

The current parallel operator ‘||’ builds its alphabet from the intersection of events on either side. This makes it difficult to express two parallel processes that synchronise on some events, but interleave on others. For example:

    SYSTEM ::= (a -> b -> c -> SKIP) |{a,c}| (a -> b -> c -> SKIP)

Here the two processes synchronise on ‘a’ and ‘c’, but perform their ‘b’ events independently.
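The intended ‘|{a,c}|’ behaviour can be sketched by modelling the shared events as two-party barriers and ‘b’ as an independent local action. This is a hypothetical Python model of the example, not compiler output.

```python
import threading

# Under |{a,c}| the two processes synchronise on a and c, but each
# performs its own b event independently (interleaving).
a, c = threading.Barrier(2), threading.Barrier(2)
trace = []
lock = threading.Lock()

def proc(name):          # a -> b -> c -> SKIP
    a.wait()             # shared event: both processes must engage
    with lock:
        trace.append("b." + name)   # local event: fires independently
    c.wait()             # shared event again

t1 = threading.Thread(target=proc, args=("P",))
t2 = threading.Thread(target=proc, args=("Q",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(trace))     # ['b.P', 'b.Q'] — two b events, one per process
```

With the current intersection-based ‘||’, the shared alphabet would be {a, b, c} and only a single, joint ‘b’ could occur — which is exactly what the proposed operator avoids.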
4.2.2. Replicated processes

Table 2 shows the proposed operators to support replicated processes. In each case, the replicator may be given as ‘[i=n,m]’ for replication of (m − n) + 1 processes, or more simply as ‘[n]’ for n replications. For the former, the name ‘i’ is a read-only integer variable, in scope for the replicated process only.

    construct                 CSP                MCSP
    sequential replication    ; {i=1..n} P       ;[i=1,n] P
    parallel replication      ∥ {i=1..n} P       ||[i=1,n] P
    interleave replication    ||| {i=1..n} P     |||[i=1,n] P

Table 2. Proposed replicator constructs
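The replicators in Table 2 can be read as loop or thread expansions. A Python sketch (with hypothetical helper names) of ‘;[i=1,n] P’ and ‘|||[i=1,n] P’, where the replicated body receives the read-only index ‘i’:

```python
import threading

# ;[i=1,n] P: run n copies of P one after another, i in scope per copy.
def seq_replicate(n, body):
    for i in range(1, n + 1):
        body(i)

# |||[i=1,n] P: run n interleaved copies of P concurrently.
def interleave_replicate(n, body):
    threads = [threading.Thread(target=body, args=(i,)) for i in range(1, n + 1)]
    for t in threads: t.start()
    for t in threads: t.join()

out = []
seq_replicate(3, lambda i: out.append(i))
print(out)                # [1, 2, 3] — strictly in order

hits, lock = [], threading.Lock()
def body(i):
    with lock:
        hits.append(i)
interleave_replicate(3, body)
print(sorted(hits))       # [1, 2, 3] — all copies ran, in some interleaved order
```

Parallel replication (‘||[i=1,n]’) would additionally synchronise the copies on their shared events, which this sketch does not model.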
4.2.3. Variables and expressions

Including variables and expressions within the language adds an amount of computational power. The three types initially proposed are ‘bool’s, ‘int’s (signed 32-bit integers) and ‘string’s. These should provide enough computational functionality for many MCSP programs, at least those which we currently have in mind. Table 3 shows the proposed additions. These, however, are not consistent with similar CSP functionality described in [2], where input and output of integers is modelled with a set of events — one for each distinct integer. Here there is only a single event and data is transferred between processes. A restriction must be imposed on outputting processes, such that outputs never synchronise with each other — i.e. one output process synchronising with one or more input processes. Interleaving outputs would be permitted, however.
    construct                        MCSP
    variable declaration and scope   name:type P
    assignment                       name := expression
    process input                    e?name
    process output                   e!expression
    process choice                   if condition then P else Q

Table 3. Proposed variable and expression constructs
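The proposed ‘e!expression’ / ‘e?name’ behaviour — where outputs interleave with one another and data passes to a single inputting process — can be sketched with a shared queue. This is an approximation (a real event would be a synchronous rendezvous rather than a buffered queue), and the writer/reader names are hypothetical.

```python
import threading
import queue

# The event e carries data: e!v on the output side, e?name on the input side.
# Two outputting processes interleave; they never synchronise with each other.
e = queue.Queue()

def writer(values):
    for v in values:
        e.put(v)                       # e!v

def reader(n, received):
    for _ in range(n):
        received.append(e.get())       # e?name

received = []
w1 = threading.Thread(target=writer, args=([1, 2],))
w2 = threading.Thread(target=writer, args=([3, 4],))
r = threading.Thread(target=reader, args=(4, received))
for t in (w1, w2, r): t.start()
for t in (w1, w2, r): t.join()
print(sorted(received))   # [1, 2, 3, 4]; the arrival order is non-deterministic
```

The restriction in the text — one output synchronising with one or more inputs, never output-with-output — is what makes this single-queue picture coherent: there is always a unique value to transfer per synchronisation.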
For expressions and conditions, we intend to support a ‘standard’ range of operators: addition, subtraction, multiply, divide and remainder for integer expressions; equal, less-than, greater-than or equal and their inverses for integer comparisons; and, for strings, concatenation and equality tests only. Within the top-level ‘environment’ process, events that are used for communication will input data from the system and display it on the screen.

One further consideration, particularly for benchmarking, is the handling of time. Keeping some consistency with occam-π, a possible implementation would be a global ‘timer’ event on which all processes interleave. Input will produce the current time (in microseconds); output will block the process until the specified time is reached.

The combination of features considered here should make it possible to write reasonably extensive and entertaining CSP programs — e.g. a visualisation of the classic “dining philosophers” problem. Over time it is likely that we will want to add, modify or entirely remove features from the MCSP language. The NOCC compiler framework should make this a relatively straightforward task, but only time will tell.

References

[1] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, London, 1985. ISBN: 0-13-153271-5.
[2] A.W. Roscoe. The Theory and Practice of Concurrency. Prentice Hall, 1997. ISBN: 0-13-674409-5.
[3] Inmos Limited. occam2 Reference Manual. Prentice Hall, 1988. ISBN: 0-13-629312-3.
[4] M.H. Goldsmith, A.W. Roscoe, and B.G.O. Scott. Denotational Semantics for occam2, Part 1. In Transputer Communications, volume 1 (2), pages 65–91. Wiley and Sons Ltd., UK, November 1993.
[5] M.H. Goldsmith, A.W. Roscoe, and B.G.O. Scott. Denotational Semantics for occam2, Part 2. In Transputer Communications, volume 2 (1), pages 25–67. Wiley and Sons Ltd., UK, March 1994.
[6] Frederick R.M. Barnes. Dynamics and Pragmatics for High Performance Concurrency. PhD thesis, University of Kent, June 2003.
[7] P.H. Welch and F.R.M. Barnes. Communicating mobile processes: introducing occam-pi. In A.E. Abdallah, C.B. Jones, and J.W. Sanders, editors, 25 Years of CSP, volume 3525 of Lecture Notes in Computer Science, pages 175–210. Springer-Verlag, April 2005.
[8] Formal Systems (Europe) Ltd., 3 Alfred Street, Oxford OX1 4EH, UK. FDR2 User Manual, May 2000.
[9] P.H. Welch and D.C. Wood. The Kent Retargetable occam Compiler. In Brian O’Neill, editor, Parallel Processing Developments, Proceedings of WoTUG 19, volume 47 of Concurrent Systems Engineering, pages 143–166, Amsterdam, The Netherlands, March 1996. World occam and Transputer User Group, IOS Press. ISBN: 90-5199-261-0.
[10] P.H. Welch, J. Moores, F.R.M. Barnes, and D.C. Wood. The KRoC Home Page, 2000. Available at: http://www.cs.kent.ac.uk/projects/ofa/kroc/.
[11] A.E. Lawrence. CSPP and Event Priority. In Alan Chalmers, Majid Mirmehdi, and Henk Muller, editors, Communicating Process Architectures 2001, volume 59 of Concurrent Systems Engineering, pages 67–92, Amsterdam, The Netherlands, September 2001. WoTUG, IOS Press. ISBN: 1-58603-202-X.
[12] P.H. Welch, F.R.M. Barnes, and F.A.C. Polack. Communicating Complex Systems. In Proceedings of ICECCS-2006, September 2006.
[13] Uwe Schmidt and Reinhard Völler. A Multi-Language Compiler System with Automatically Generated Codegenerators. ACM SIGPLAN Notices, 19(6), June 1984.
[14] Zehra Sura, Xing Fang, Chi-Leung Wong, Samuel P. Midkiff, Jaejin Lee, and David Padua. Compiler techniques for high performance sequentially consistent Java programs. In PPoPP ’05: Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming, pages 2–13, New York, NY, USA, 2005. ACM Press.
[15] B. Joy, J. Gosling, and G. Steele. The Java Language Specification. Addison-Wesley, 1996. ISBN: 0-20-163451-1.
[16] D. Lea. Concurrent Programming in Java (Second Edition): Design Principles and Patterns. The Java Series. Addison-Wesley, 1999. Section 4.5.
[17] International Standards Organization, IEEE. Information Technology – Portable Operating System Interface (POSIX) – Part 1: System Application Program Interface (API) [C Language], 1996. ISO/IEC 9945-1:1996 (E) IEEE Std. 1003.1-1996 (Incorporating ANSI/IEEE Stds. 1003.1-1990, 1003.1b-1993, 1003.1c-1995, and 1003.1i-1995).
[18] F.R.M. Barnes. NOCC: A New occam-pi Compiler. URL: http://www.cs.kent.ac.uk/projects/ofa/nocc/.
[19] Xiaoqing Wu, Barrett R. Bryant, Jeff Gray, Suman Roychoudhury, and Marjan Mernik. Separation of concerns in compiler development using aspect-orientation. In SAC ’06: Proceedings of the 2006 ACM symposium on Applied computing, pages 1585–1590, New York, NY, USA, 2006. ACM Press.
[20] M.D. Poole. Extended Transputer Code – a Target-Independent Representation of Parallel Programs. In P.H. Welch and A.W.P. Bakkers, editors, Architectures, Languages and Patterns for Parallel and Distributed Applications, Proceedings of WoTUG 21, volume 52 of Concurrent Systems Engineering, pages 187–198, Amsterdam, The Netherlands, April 1998. WoTUG, IOS Press. ISBN: 90-5199-391-9.
[21] Peri Hankey. The Language Machine. URL: http://languagemachine.sourceforge.net/.
[22] Matthew Flatt. Composable and compilable macros: you want it when? In ICFP ’02: Proceedings of the seventh ACM SIGPLAN international conference on Functional programming, pages 72–83, New York, NY, USA, 2002. ACM Press.
[23] Mordechai Ben-Ari. How to solve the Santa Claus problem. Concurrency: Practice and Experience, 10(6):485–496, 1998.
[24] J. Moores. CCSP – a Portable CSP-based Run-time System Supporting C and occam. In B.M. Cook, editor, Architectures, Languages and Techniques for Concurrent Systems, volume 57 of Concurrent Systems Engineering series, pages 147–168, Amsterdam, The Netherlands, April 1999. WoTUG, IOS Press. ISBN: 90-5199-480-X.
[25] Inmos Limited. The T9000 Transputer Instruction Set Manual. SGS-Thomson Microelectronics, 1993. Document number: 72 TRN 240 01.
[26] J.C.P. Woodcock and A.L.C. Cavalcanti. The Semantics of Circus. In ZB 2002: Formal Specification and Development in Z and B, volume 2272 of Lecture Notes in Computer Science, pages 184–203. Springer-Verlag, 2002.
Communicating Process Architectures 2006 Peter Welch, Jon Kerridge, and Fred Barnes (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press. All rights reserved.
A Fast Resolution of Choice between Multiway Synchronisations (Invited Talk)

Peter H. WELCH
Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NF, England
[email protected]

Abstract. Communicating processes offer a natural and scalable architecture for many computational systems: networks-within-networks, explicit dependencies (through “visible plumbing”, i.e. shared events), and explicit independencies (through “air gaps”, i.e. no shared events). CSP is a process algebra enabling the formal specification of such systems and their refinement to executable implementation. It allows the description of complex patterns of synchronisation between component processes and provides semantics sufficiently powerful to capture non-determinism, multiway synchronisation, channel communication, deadlock and divergence. However, programming languages (such as occam-π) and libraries (JCSP, CTJ, C++CSP, etc.), offering CSP primitives and operators, have always restricted certain combinations from use. The reasons were solely pragmatic: implementation overheads. The main restriction lies in the set of events that a process may offer … and non-deterministically choose between if more than one becomes available. The constraint is that if one process is choosing between event e and some other events, other processes offering e must do so only in a committed way – i.e. not as part of a choice of their own. The choice can then be resolved with a simple handshake. Hence, only the process on the input side of a channel may offer it within a choice construct (ALT) – an outputting process always commits. Similarly, choice between multiway synchronisations is banned, since symmetry allows no one process the privilege. This is unfortunate since free-wheeling choices between all kinds of event are routinely specified by CSP designers, and they must be transformed into systems that meet the constraints. Because any process making such a choice withdraws all bar one of its offers to synchronise as it makes that choice, resolution is usually managed through a 2-phase commit protocol.
This introduces extra channels, processes and serious run-time overheads. Without automated tools, that transformation is error prone. The resulting system is expressed at a lower level that is hard to maintain. Maintenance, therefore, takes place at the higher level and the transformations have continually to be re-applied. This talk presents a fast resolution of choice between multiway synchronisations (the most general form of CSP event). It does not involve 2-phase commit logic and its cost is linear in the number of choice events offered by the participants. A formal proof of its correctness (that the resolution is a traces-failures-divergences refinement of the specified CSP) has not been completed, but we are feeling confident. Preliminary bindings of this capability have been built into the JCSP library (version 1.0 rc6) and an experimental (read: complete re-write) occam-π compiler. This will remove almost all of the constraints on the direct and efficient realisation of CSP designs as executable code. An example of its use in a (very) simple model of blood clotting (from our TUNA project) will be presented.

Keywords: fast choice, multiway synchronisation, events, processes, CSP, occam-π.
Author Index

Allen, A.R.        109, 123
Barnes, F.R.M.     v, 311, 377
Broenink, J.F.     151, 179
Brown, N.          237, 253
Burgin, M.         281
Chalmers, K.       31, 41, 59
Clayton, S.        59
Cook, B.           1
Dimmich, D.J.      215, 269
Faust, O.          109, 123
Happe, H.H.        203
Hilderink, G.H.    297
Jacobsen, C.L.     215, 225, 269
Jadud, M.C.        215, 225, 269
Kerridge, J.       v, 31, 41
Kumar, S.          135
Lehmberg, A.A.     13
McEwan, A.A.       339
Olsen, M.N.        13
Orlic, B.          151, 179
Pedersen, J.B.     363
Ritson, C.G.       311
Romdhani, I.       31
Sampson, A.T.      77, 311
Schweigler, M.     77
Simpson, J.        225
Smith, M.L.        281
Sputh, B.H.C.      109, 123
Stiles, G.S.       135
Teig, Ø.           331
Walker, P.         1
Welch, P.H.        v, 389