Gabriel Ciobanu
Membrane Computing and Biologically Inspired Process Calculi
“A.I.Cuza” University Press, Iași. © Copyright Gabriel Ciobanu 2010
Contents

Preface  1

Chapter 1. Membrane Systems and Their Semantics  3
  1. Operational Semantics of Membrane Systems  12
  2. Implementing Membrane Systems by Using Maude  17
  3. Register Membranes for Rules with Promoters and Inhibitors  23
  4. Reversibility in Membrane Computing  30
  5. Minimal Parallelism  38
  6. Membrane Transducers  56
Bibliography  81
Chapter 2. Complexity of Membrane Systems  83
  1. Computational Complexity of Simple P Systems  83
  2. Complexity of Evolution in Maximum Cooperative P Systems  93
  3. Evolving by Maximizing the Number of Rules  101
  4. Strategies of Using the Rules of a P System in a Maximal Way  108
Bibliography  117
Chapter 3. Mobile Membranes and Links to Ambients and Brane Calculi  119
  1. Simple Mobile Membranes  123
  2. Enhanced Mobile Membranes  127
  3. Mutual Mobile Membranes  147
  4. Mutual Membranes with Objects on Surface  157
  5. Mobile Membranes with Timers  170
  6. Mobile Membranes and Mobile Ambients  177
Bibliography  215
Chapter 4. Multiset Information Theory and Encodings  217
  1. Multiset Information Theory  217
  2. Data Compression on Multisets. Submultiset-Free Codes  225
  3. Number Encodings and Arithmetics over Multisets  234
  4. Arithmetic Expressions in Membrane Systems  246
  5. Various Encodings of Multisets  252
Bibliography  263
Chapter 5. Modelling Power of Membrane Systems  265
  1. Modeling Cell-Mediated Immunity by Membrane Systems  265
  2. Membrane Description of the Sodium-Potassium Pump  283
  3. Distributed Evolutionary Algorithms Inspired by Membranes  294
  4. Membrane Systems and Distributed Computing  310
  5. Membrane Computing Software Simulators  325
Bibliography  339
Bibliography  341
Preface

The present volume contains the results obtained and published in recent years on membrane computing and biologically inspired process calculi. The first chapter is mainly devoted to the semantics of membrane computing. The second chapter presents results related to the complexity of various variants of membrane systems. Several computability results on mobile membranes are presented in the third chapter. The fourth chapter is dedicated to multiset information theory and encodings of multisets. All these chapters include original approaches and results presented previously in papers published in conference proceedings and journals. These papers are listed in the bibliography at the end of each chapter that uses them. Thus the volume represents a sort of survey of the research activity in the field of membrane computing and related formalisms. It presents new ideas concerning all these formalisms, as well as new properties and relationships. The emphasis is mainly on the computational properties of the models. The relationships to certain calculi such as mobile ambients and brane calculi are studied. Moreover, the formalisms are used to model and analyze biological systems.

The results presented in this volume were obtained during the last years by working together with many researchers: Oana Agrigoroaiei, Bogdan Aman, Oana Andrei, Daniela Besozzi, Cosmin Bonchiș, Rahul Desai, Mihai Gontineac, Wenyuan Guo, Cornel Izbașa, Akash Kumar, S.N. Krishna, Solomon Marcus, Dorel Lucanu, Gheorghe Păun, Andreas Resios, Gheorghe Ștefănescu, Bogdan Tanasă, Daniela Zaharie and many others. I have learned a lot from them, and I express my gratitude for their valuable contribution and for the nice and fruitful collaboration over the last decade. Special thanks go to Gheorghe Păun for initiating the field of membrane computing, and for his encouragement and support.
Gabriel Ciobanu
Iași, Romania
CHAPTER 1
Membrane Systems and Their Semantics

Membrane computing is a branch of natural computing which investigates computing models abstracted from the structure and functioning of living cells and from their interaction in tissues or higher order biological structures. Membrane systems are essentially parallel and nondeterministic computing models inspired by the compartments of eukaryotic cells and their biochemical reactions. The structure of the cell is represented by a set of hierarchically embedded regions, each one delimited by a surrounding boundary (called a membrane), and all of them contained inside an external special membrane called the skin. The molecular species (ions, proteins, etc.) floating inside cellular compartments are represented by multisets of objects described by means of symbols or strings over a given alphabet. The objects can be modified or communicated between adjacent compartments. Chemical reactions are represented by evolution rules which operate on the objects, as well as on the compartmentalized structure (by dissolving, dividing, creating, or moving membranes).

A membrane system can perform computations in the following way: starting from an initial configuration, which is defined by the multiset of objects initially placed inside the membranes, the system evolves by applying the evolution rules of each membrane in a nondeterministic and maximally parallel manner. A rule is applicable when all the objects which appear in its left hand side are available in the region where the rule is placed. The maximally parallel way of using the rules means that in each step, in each region of the system, a multiset of rules is chosen which is maximal and applicable, namely a multiset of rules such that no further rule can be added to the multiset. A halting configuration is reached when no rule is applicable. The result is represented by the number of objects from a specified membrane.
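The maximal parallel application of rules in a single region can be sketched in Python. This is a hypothetical encoding, not the book's implementation: multisets are `Counter`s, a rule is a pair of left- and right-hand multisets, and `random.choice` models the nondeterministic choice; objects produced in a step are kept aside so they become available only in the next step.

```python
import random
from collections import Counter

def applicable(rule, objects):
    # A rule (lhs, rhs) is applicable if its left-hand side is
    # contained in the region's current multiset of objects.
    lhs, _ = rule
    return all(objects[o] >= n for o, n in lhs.items())

def maximal_parallel_step(rules, objects):
    # Nondeterministically build a maximal applicable multiset of rules:
    # keep assigning objects to randomly chosen applicable rules until
    # no further rule can be added, then collect the right-hand sides.
    objects = Counter(objects)
    produced = Counter()
    applied = 0
    while True:
        candidates = [r for r in rules if applicable(r, objects)]
        if not candidates:
            break
        lhs, rhs = random.choice(candidates)
        objects -= Counter(lhs)   # consume the left-hand side
        produced += Counter(rhs)  # outputs are available only next step
        applied += 1
    if applied == 0:
        return None               # halting: no rule was applicable
    return objects + produced

# The rules a -> bc and b -> aa, applied to the multiset a^3 b^2.
rules = [({'a': 1}, {'b': 1, 'c': 1}), ({'b': 1}, {'a': 2})]
step = maximal_parallel_step(rules, {'a': 3, 'b': 2})
# The two rules do not compete for objects here, so the outcome is
# always the multiset a^4 b^3 c^3 despite the random choices.
```

Note that the nondeterminism only concerns the order of assignments in this example; when rules compete for the same objects, different maximal multisets of rules (and thus different results) can be obtained.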
Several variants of membrane systems are inspired by different aspects of living cells (symport-based and antiport-based communication through membranes, catalytic objects, membrane charge, etc.). Their computing power and efficiency have been investigated using the approaches of formal languages, grammars, register machines and complexity theory. Membrane systems (also called P systems) are presented together with many variants and examples in [90]. Several applications of these systems are presented
in [CPP]. An updated bibliography can be found at the web page http://ppage.psystems.eu.

This chapter is mainly devoted to the semantics of membrane systems. First we present an abstract syntax of the membrane systems, and we define a structural operational semantics of P systems by means of three sets of inference rules corresponding to maximal parallel rewriting, parallel communication, and parallel dissolving. Then we describe an implementation of the P systems based on the operational semantics, together with some results on its correctness. Using a representation given by register membranes, it is possible to describe the evolution involving rules with promoters and inhibitors. The evolution is expressed in terms of both dynamic and static allocation of resources to rules, and we prove that these semantics are equivalent. Dynamic allocation allows translation of the maximal parallel application of membrane rules into sequential rewritings. Minimal parallelism and P transducers represent two related topics also presented in this chapter.

Membrane Systems. We follow the presentation of the membrane systems given by Gheorghe Păun, the father of the new field of Membrane Computing (MC). The first step is to present the main formal ingredients by abstracting a biological cell.

[Figure: a cell-like membrane structure. The skin membrane (labeled 1) encloses regions delimited by membranes 2 to 6, some of which are elementary; one region contains the multiset of objects a³b²c and the evolution rules a → bc and b → a².]
As suggested in these pictures, we distinguish the external membrane corresponding to the plasma membrane; it is called the skin membrane. Several membranes can be placed inside the skin membrane, defining the membrane structure. These membranes correspond to the compartments present in a cell around the nucleus, in Golgi apparatus, vesicles, mitochondria, and so on. A membrane without any other membrane inside it is said to be elementary. The membranes are identified by their labels taken from a given set of labels. Usually we use numbers, starting with number 1 assigned to the skin membrane (this is the standard labeling, but the labels can be meaningful “names” associated with the membranes), and the
labels are assigned in a one-to-one manner to membranes. The hierarchical structure of membranes can be represented by a rooted tree, where the root of the tree is associated with the skin membrane and the leaves are associated with the elementary membranes. Directly suggested by the tree representation is the symbolic representation of a membrane structure by strings of labeled matching parentheses. The membranes of the same level can float around; that is, the tree representing the membrane structure is not oriented, and in terms of parentheses expressions, two subexpressions placed at the same level represent the same membrane structure.

In the basic variant of P systems, each region contains a multiset of symbol objects which correspond to the chemicals swimming in a solution in a cell compartment. These chemicals are considered here as unstructured, and this is why they are denoted by symbols from a given alphabet. The objects evolve by means of evolution rules, which are also localized within the regions of the membrane structure. There are three main types of rules: (1) multiset-rewriting rules (one calls them, simply, evolution rules), (2) communication rules, and (3) rules for handling membranes.

The first type of rules corresponds to the chemical reactions possible in the compartments of a cell. They are of the form u → v, where u and v are multisets of objects. However, in order to make the compartments cooperate, we have to move objects across membranes, and for this we add target indications to the objects produced by a rule as above (to the objects from the multiset v). These indications are here, in, and out, with the meanings that an object associated with the indication here remains in the same region, one associated with the indication in goes immediately into an adjacent lower membrane, nondeterministically chosen, and out indicates that the object has to exit the membrane, thus becoming an element of the region surrounding it.
An example of an evolution rule is ab → (a, here)(b, out)(c, here)(c, in) having target indications associated with the objects produced by rule application. After using this rule in a given region of a membrane structure, one copy of a and one of b are consumed (and so removed from the starting multiset of that region), producing one copy of a, one of b, and two of c. The resulting copy of a remains in the same region, and the same happens with one copy of c (target indications here), while the new copy of b exits the membrane, going to the surrounding region (target indication out), and one of the new copies of c enters one of the child membranes, nondeterministically chosen. If no such child membrane exists (i.e., the membrane with which the rule is associated is elementary), then the indication in cannot be followed and the rule cannot be applied. In turn, if the rule is applied in the skin region, then b goes up into the environment of the system (and it is “lost” there, since it can never come back). In general, the indication here is not specified; an object without an explicit target indication is supposed to remain in the same region where the rule is applied.
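For illustration, one application of this rule with its target routing can be sketched as follows. This is a hypothetical encoding: multisets are `Counter`s, the parent region and the child regions are passed in explicitly, and `pick_child` models the nondeterministic choice of a child membrane for the `in` target.

```python
from collections import Counter

# The rule ab -> (a,here)(b,out)(c,here)(c,in), with explicit targets.
RULE_LHS = Counter({'a': 1, 'b': 1})
RULE_RHS = [('a', 'here'), ('b', 'out'), ('c', 'here'), ('c', 'in')]

def apply_with_targets(region, parent, children, pick_child):
    # Apply the rule once in `region`, routing each produced object
    # according to its target indication.
    if not children:
        # An elementary membrane cannot follow the `in` indication,
        # so the rule is not applicable there.
        raise ValueError("'in' target but the membrane is elementary")
    region = region - RULE_LHS          # consume one a and one b
    for obj, target in RULE_RHS:
        if target == 'here':
            region[obj] += 1            # stays in the same region
        elif target == 'out':
            parent[obj] += 1            # goes to the surrounding region
        else:                           # 'in'
            pick_child(children)[obj] += 1
    return region

parent, child = Counter(), Counter()
region = apply_with_targets(Counter({'a': 2, 'b': 1, 'c': 1}),
                            parent, [child], lambda cs: cs[0])
# region keeps a^2 c^2 (one a and one b consumed; a and c produced here),
# the parent region receives b, the chosen child receives c.
```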
A rule with at least two objects in its left hand side is said to be cooperative; a particular case is that of catalytic rules of the form ca → cv, where c is an object called catalyst which assists the object a to evolve into the multiset v. The rules of the form a → v, where a is a single object, are called noncooperative. The rules can also have the form u → vδ, where δ denotes the action of membrane dissolving: if the rule is applied, then the corresponding membrane disappears and its contents (objects and membranes) go to the surrounding membrane. The rules of the dissolved membrane disappear with the membrane. The skin membrane is never dissolved.

The communication of objects through membranes evokes the fact that biological membranes contain various (protein) channels through which the molecules can pass (in a passive way, due to concentration difference, or in an active way, with consumption of energy), in a rather selective manner. The communication of objects from a compartment to a neighboring compartment can be controlled by rules with target indications as above, or by special rules without target indications (e.g., symport/antiport rules).

The rules are used in the maximally parallel manner, nondeterministically choosing the rules and the objects. Specifically, this means that we assign objects to rules, nondeterministically choosing the objects and the rules, until no further assignment is possible.
Mathematically stated, we look at the set of rules and try to find a multiset of rules, by assigning multiplicities to rules, with two properties: (i) the multiset of rules is applicable to the multiset of objects available in the respective region (i.e., there are enough objects to apply the rules a number of times as indicated by their multiplicities); and (ii) the multiset is maximal, i.e., no further rule can be added to it (no multiplicity of a rule can be increased, because of the lack of available objects).

Thus, an evolution step in a given region consists of finding a maximal applicable multiset of rules, removing from the region all objects specified in the left hand sides of the chosen rules (with multiplicities as indicated by the rules and by the number of times each rule is used), producing the objects from the right hand sides of the rules, and then distributing these objects as indicated by the targets associated with them. After this, if at least one of the rules introduces the dissolving action δ, then the membrane is dissolved and its contents become part of the parent membrane (provided that this membrane was not dissolved at the same time).

Formally, a transition P system of degree m ≥ 1 is a construct of the form Π = (O, C, µ, w1, w2, ..., wm, R1, R2, ..., Rm, io), where:
(1) O is the (finite and nonempty) alphabet of objects,
(2) C ⊂ O is the set of catalysts,
(3) µ is a membrane structure, consisting of m membranes, labeled 1, 2, ..., m; we say that the membrane structure, and hence the system, is of degree m,
(4) w1, w2, ..., wm are strings over O representing the multisets of objects present in regions 1, 2, ..., m of the membrane structure,
(5) R1, R2, ..., Rm are finite sets of evolution rules associated with regions 1, 2, ..., m of the membrane structure,
(6) io is either one of the labels 1, 2, ..., m, and the respective region is the output region of the system, or it is 0, and the result of a computation is collected in the environment of the system.
The rules are of the form u → v or u → vδ, with u ∈ O+ and v ∈ (O × Tar)∗, where by V∗ we denote the set of all strings over an alphabet V, including the empty string λ, by V+ we denote the set V∗ − {λ} of all nonempty strings over V, and Tar = {here, in, out}.

In their basic variant, membrane systems are synchronous devices, in the sense that a global clock is assumed which marks the time for all regions of the system. In each time unit, a transformation of a configuration (the multisets of objects from the compartments of the system) is called a transition; it takes place by applying the rules in each region in a nondeterministic and maximally parallel manner. This means that the objects to evolve and the rules governing this evolution are chosen in a nondeterministic way, and this choice is “exhaustive” in the sense that, after the choice is made, no rule can be applied in the same evolution step to the remaining objects. A sequence of transitions constitutes a computation. A computation is successful if it halts, i.e., reaches a configuration where no rule can be applied to the existing objects, and the output region io still exists in the halting configuration (in the case where io is the label of a membrane, it can be dissolved during the computation, but the computation is then no longer successful).

With a successful computation we can associate a result in various ways. If we have an output region specified, and this is an internal region, then we have an internal output: we count the objects present in the output region in the halting configuration, and this number is the result of the computation. When we have io = 0, we count the objects which leave the system during the computation, and this is called external output. In both cases the result is a number. If we distinguish among different objects, then we can have as the result a vector of natural numbers.
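One possible encoding of the construct Π as a data structure is sketched below. The names are hypothetical; in particular, the membrane structure µ is given as a parent-to-children map rather than as a string of matching parentheses, and each multiset wi is given as a string over O.

```python
from dataclasses import dataclass

@dataclass
class PSystem:
    """Pi = (O, C, mu, w1..wm, R1..Rm, io): a transition P system
    of degree m (a hypothetical encoding, for illustration only)."""
    objects: set     # O: the alphabet of objects
    catalysts: set   # C, a subset of O
    structure: dict  # mu: membrane label -> list of child labels
    contents: dict   # label -> initial multiset, as a string over O
    rules: dict      # label -> list of (u, v) evolution rules
    output: int      # io: output membrane label, or 0 for the environment

# A degree-2 system: the skin membrane 1 contains membrane 2.
pi = PSystem(
    objects={'a', 'b', 'c'},
    catalysts=set(),
    structure={1: [2], 2: []},
    contents={1: 'aab', 2: ''},
    rules={1: [('a', [('b', 'in')])],    # a -> (b, in)
           2: [('b', [('c', 'out')])]},  # b -> (c, out)
    output=2,
)
```

The degree of the system is recovered as the number of labels in the structure map; the tree shape of µ is implicit in the parent-to-children entries.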
The objects which leave the system can also be arranged in a sequence according to the time when they exit the skin membrane, and in this case the result is a string (if several objects exit at the same time, then all their permutations are accepted as a substring of the result). Note that non-halting computations provide no output (we cannot know when a number is “completely computed” before halting); if the output membrane is dissolved during the computation, then the computation aborts, and no result is obtained (of course, this makes sense only in the case of internal output). According to the nondeterminism of the application of rules, starting from an initial configuration we can get several successful computations, and
hence several results. Thus, a membrane system computes (or generates) a set of numbers (or a set of vectors of numbers, or a language).

The initial goal of membrane computing was to define computability models inspired by cell biology, and indeed a large part of the investigations in this area was devoted to producing computing devices and examining their computing power, in comparison with the standard models in computability theory, Turing machines and their restricted variants. As it turns out, most of the classes of P systems considered are equal in power to Turing machines. In a rigorous manner, we have to say that they are Turing complete (or computationally complete), but because the proofs are always constructive, starting the constructions used in these proofs from universal Turing machines or from equivalent devices, we obtain universal P systems (able to simulate any other P system of the given type after introducing a “code” of the particular system as an input in the universal one). That is why we speak about universality results and not about computational completeness. Most classes of membrane systems are known to be universal. In general, for P systems working with symbol objects, these universality results are proved by simulating computing devices known to be universal, and which either work with numbers or do not essentially use the positional information from strings. This is true/possible for register machines, matrix grammars (in the binary normal form), programmed grammars, regularly controlled grammars, and graph-controlled grammars (but not for arbitrary Chomsky grammars and for Turing machines, which can be used only in the case of string objects).

Universality. Several universality results are presented in [90]. We mention only a few universality results of particular interest. For a given system Π we denote by N(Π) the set of numbers computed by Π in the above way.
The family of sets N(Π) of numbers (we hence use the symbol N) generated by P systems of a specified type (P), working with symbol objects (O), having at most m membranes, and using features/ingredients from a given list is denoted by NOPm(list-of-features). If we compute sets of vectors, we write PsOPm(...), with Ps coming from “Parikh set.” Regarding the features of the system, cooperative rules are indicated by coo; catalytic rules are indicated by cat, adding that the number of catalysts matters, and hence cat_r indicates that we use systems with at most r catalysts. When using a priority relation, we write pri. For the actions δ, τ we write simply δ, τ. Endocytosis and exocytosis operations are indicated by endo and exo, respectively. In the case of systems using symport/antiport rules, we have to specify the maximal weight of the used rules, and this is done by writing sym_p, anti_q, meaning that symport rules of weight at most p and antiport rules of weight at most q are allowed. There are many other features with mnemonic notations which we do not mention here. Once again, the monograph [90] is the main reference.
Specific examples of families of numbers appear in the few universality results which we mention here. In these results, NRE denotes the family of Turing computable sets of numbers (the notation comes from the fact that these numbers are the length sets of recursively enumerable languages generated by Chomsky type-0 grammars or recognized by Turing machines). NRE is also the family of sets of numbers generated/recognized by register machines. When dealing with vectors of numbers, hence with the Parikh images of languages (or with the sets of vectors generated/recognized by register machines), we write PsRE. Here are a few examples of universality results in which the number of membranes is rather small:
(1) NRE = NOP1(cat_2);
(2) NRE = NOP3(sym_1, anti_1) = NOP3(sym_2, anti_0).

The computational power is only one of the important issues when defining a new computing model. The other fundamental issue concerns the computing efficiency, the resources used for solving problems (natural computing is especially concerned with this issue). Since membrane systems are parallel computing devices, it is expected that they could solve hard problems in a more efficient manner, and this is confirmed for systems provided with ways of producing an exponential workspace in linear time [90].

Computational power is of interest to theoretical computer science, and computational efficiency is of interest to practical computer science, but neither is of direct interest to biology. However, if a biologist is interested in simulating a cell (and this seems to be a major concern of biology today), then the generality of the model is directly linked to the possibility of algorithmically solving questions about the model. An example: is a given configuration reachable from the initial configuration? Imagine that the initial configuration represents a healthy cell and we are interested in knowing whether a sickness state is ever reached.
Then, if both healthy and non-healthy configurations can be reached, the question arises whether we can find the configurations making the difference, and this is again a reachability issue. Therefore both the power and the efficiency are, indirectly, of interest also to biologists. However, the immediate concern of biological research is the evolution of biological systems, and not the result of a specific evolution. The direct interest to biology is represented by the computation/evolution of a membrane system. Although membrane computing was not intended initially to deal with this issue, a series of recent investigations indicate a strong tendency toward considering membrane systems as dynamical systems.

Operational Semantics. Operational semantics provides a way of describing rigorously the evolution of a system. Configurations are states of a transition system, and a computation consists of sequences of transitions between configurations, terminating (if it terminates) in a final configuration.
Structural operational semantics (SOS) provides a framework for defining a formal description of a computing system. It is intuitive and flexible, and it has become more attractive over the years through the developments presented by G. Plotkin [97], G. Kahn [55], and R. Milner [76].

In basic P systems, a computation is regarded as a sequence of parallel applications of rules in various membranes, followed by a communication step and a dissolving step. A SOS of the P systems emphasizes the deductive nature of membrane computing by describing the transition steps by using a set of inference rules. Considering a set R of inference rules of the form premises/conclusion, the evolution of a P system can be presented as a deduction tree. A sequence of transition steps represents a computation. A computation is successful if this sequence is finite, namely there is no rule applicable to the objects present in the last committed configuration. In a halting committed configuration, the result of a successful computation is the total number of objects present either in the membrane considered as the output membrane, or in the outer region.

In this chapter a structural operational semantics of P systems is defined by means of three sets of inference rules corresponding to maximally parallel rewriting, parallel communication, and parallel dissolving. Based on this formal description, an implementation of the P systems can be derived, together with some results regarding its correctness. Using register membranes, it is possible to describe the evolution of P systems involving rules with promoters and inhibitors. Such an evolution can be expressed in terms of both dynamic and static allocation of resources to rules; the corresponding semantics are proved to be equivalent.

Configurations and Transitions.
First we present an inductive definition of the membrane structure and of the sets of configurations for a P system; an intuitive definition of the transition system is given by considering each of the transition steps: maximal parallel rewriting, parallel communication, and parallel dissolving. The operational semantics of P systems is implemented by using the rewriting system called Maude. The relationship between the operational semantics of P systems and Maude rewriting is given by certain operational correspondence results.

Let O be a finite alphabet of objects over which we consider the free commutative monoid Oc∗, whose elements are multisets. The empty multiset is denoted by empty. Objects can be enclosed in messages together with a target indication. We have here messages of typical form (w, here), out messages (w, out), and in messages (w, in_L). For the sake of simplicity, hereinafter we consider that the messages with the same target indication merge into one message:

∏_{i∈I} (vi, out) = (w, out),  ∏_{i∈I} (vi, here) = (w, here),  ∏_{i∈I} (vi, in_L) = (w, in_L),

with w = ∏_{i∈I} vi, where I is a non-empty set and (vi)_{i∈I} is a family of multisets over O.
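The merging convention can be sketched in Python. This is a hypothetical encoding in which a message is a pair of a multiset and a target indication, and an in_L target is represented as the pair ('in', L).

```python
from collections import Counter, defaultdict

def merge_messages(messages):
    # Merge all messages with the same target indication into one
    # message whose multiset is the union of the component multisets.
    merged = defaultdict(Counter)
    for multiset, target in messages:
        merged[target] += Counter(multiset)
    return dict(merged)

msgs = [({'a': 1}, 'out'), ({'b': 2}, 'out'), ({'c': 1}, ('in', 2))]
m = merge_messages(msgs)
# The two out messages merge into (ab^2, out); the in message
# addressed to membrane 2 stays separate.
```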
We use the mappings rules and priority to associate the set of evolution rules and the priority relation to a membrane label: rules(Li) = Ri, priority(Li) = ρi, as well as the projections L and w which return from a membrane its label and its current multiset, respectively. The set M(Π) of membranes for a P system Π, and the membrane structures, are inductively defined as follows:
• if L is a label and w is a multiset over O ∪ (O × {here}) ∪ (O × {out}) ∪ {δ}, then ⟨L | w⟩ ∈ M(Π); ⟨L | w⟩ is called a simple (or elementary) membrane, and it has the structure ⟨⟩;
• if L is a label, w is a multiset over O ∪ (O × {here}) ∪ (O × {in_L(Mj) | j ∈ [n]}) ∪ (O × {out}) ∪ {δ}, and M1, ..., Mn ∈ M(Π), n ≥ 1, where each membrane Mi has the structure µi, then ⟨L | w ; M1, ..., Mn⟩ ∈ M(Π); ⟨L | w ; M1, ..., Mn⟩ is called a composite membrane having the structure ⟨µ1, ..., µn⟩.

We conventionally suppose the existence of a set of sibling membranes denoted by NULL, such that M, NULL = M = NULL, M and ⟨L | w ; NULL⟩ = ⟨L | w⟩. The use of NULL significantly simplifies several definitions and proofs. Let M∗(Π) be the free commutative monoid generated by M(Π) with the operation ( , ) and the identity element NULL. We define M+(Π) as the set of elements from M∗(Π) without the identity element. Let M+, N+ range over non-empty sets of sibling membranes, Mi over membranes, M∗, N∗ over possibly empty multisets of sibling membranes, and L over labels.

The membranes preserve their initial labelling, evolution rules and priority relation in all subsequent configurations. Therefore, in order to describe a membrane we consider its label and the current multiset of objects together with its structure.

A configuration for a P system Π is a skin membrane which has no messages and no dissolving symbol δ, i.e., the multisets of all regions are elements of Oc∗. We denote by C(Π) the set of configurations for Π.
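The inductive membrane definition and the distinction between configurations and intermediate configurations can be sketched as follows. This is a hypothetical encoding: plain objects are strings, messages are (object, target) pairs, and the dissolving symbol δ is the string 'delta'.

```python
from collections import Counter

class Membrane:
    """<L | w ; M1, ..., Mn>: a membrane with a label, a current
    multiset (possibly holding messages or the dissolving symbol),
    and child membranes; n = 0 gives an elementary membrane."""
    def __init__(self, label, contents, children=()):
        self.label = label
        self.contents = Counter(contents)
        self.children = list(children)

def is_configuration(skin):
    # A configuration is a skin membrane whose regions contain only
    # plain objects: no messages, no dissolving symbol delta.
    plain = all(isinstance(o, str) and o != 'delta' for o in skin.contents)
    return plain and all(is_configuration(m) for m in skin.children)

# The inner membrane holds the message (b, out), so this is only
# an intermediate configuration, not a configuration.
intermediate = Membrane(1, {'a': 2}, [Membrane(2, {('b', 'out'): 1})])
config = Membrane(1, {'a': 2}, [Membrane(2, {'b': 1})])
```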
An intermediate configuration is an arbitrary skin membrane in which we may find messages or the dissolving symbol δ. We denote by C#(Π) the set of intermediate configurations. We have C(Π) ⊆ C#(Π). Each P system has an initial configuration which is characterized by the initial multiset of objects for each membrane and the initial membrane structure of the system. For two configurations C1 and C2 of Π, we say that there is a transition from C1 to C2, and write C1 ⇒ C2, if the following steps are executed in the given order:
(1) maximal parallel rewriting step: each membrane evolves in a maximal parallel manner;
(2) parallel communication of objects through membranes, consisting in sending and receiving messages;
(3) parallel membrane dissolving, consisting in dissolving the membranes containing δ.
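The ordering of the three steps can be sketched abstractly. The three step functions are assumed given elsewhere (hypothetical names); a maximal-parallel-rewriting step returning None models the situation in which no rule is applicable.

```python
def transition(config, mpr_step, send_receive, dissolve):
    """One transition C1 => C2: the three steps applied in order.
    If the first step is not possible, neither of the other two
    takes place and the configuration is a halting one."""
    after_mpr = mpr_step(config)
    if after_mpr is None:
        return None                       # halting configuration reached
    after_comm = send_receive(after_mpr)  # acts only if messages exist
    return dissolve(after_comm)           # acts only if delta occurs

# Stub step functions, just to exhibit the ordering of the phases.
result = transition('c1', lambda c: c + '+mpr',
                    lambda c: c + '+msg', lambda c: c + '+diss')
```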
The last two steps take place only if there are messages or δ symbols resulting from the first step, respectively. If the first step is not possible, then neither are the other two steps; we say that the system has reached a halting configuration. 1. Operational Semantics of Membrane Systems We present shortly an operational semantics of P systems, considering each of the three steps. 1.1. Maximal Parallel Rewriting Step. Here we formally define the mpr maximal parallel rewriting =⇒L for a multiset of objects in one membrane, mpr and we extend it to maximal parallel rewriting =⇒ over several membranes. Some preliminary notions are required. Definition 1.1. The irreducibility property w.r.t. the maximal parallel rewriting relation for multisets of objects, membranes, and for sets of sibling membranes is defined as follows: • a multiset of messages and the dissolving symbol δ are Lirreducible; • a multiset of objects w is L-irreducible iff there are no rules in rules(L) applicable to w with respect to the priority relation priority(L); • a simple membrane h L | w i is mpr-irreducible iff w is Lirreducible; • a non-empty set of sibling membranes M1 , . . . , Mn is mprirreducible iff Mi is mpr-irreducible for every i ∈ [n]; N U LL is mpr-irreducible; • a composite membrane h L | w ; M1 , . . . , Mn i is mpr-irreducible iff w is L-irreducible, and the set of sibling membranes M1 , . . . , Mn is mpr-irreducible. The priority relation is a form of control on the application of rules. In the presence of a priority relation and in the context of a parallel rewriting, no rule of a lower priority can be used during the same evolution step when a rule with a higher priority can be used, even if the two rules do not compete for the same objects. We formalize the conditions imposed by the priority relation on rule applications in the definition below. Definition 1.2. Let M be a membrane labelled by L, and w a multiset of objects. A non-empty multiset R = (u1 → v1 , . . . 
, un → vn) of evolution rules is (L, w)-consistent if:
- R ⊆ rules(L);
- w = u1 … un z, so each rule r ∈ R is applicable on w;
- (∀r ∈ R, ∀r′ ∈ rules(L)) if r′ is applicable on w, then (r′, r) ∉ priority(L) (where (r1, r2) ∈ priority(L) iff r1 > r2);
- (∀r′, r″ ∈ R) (r′, r″) ∉ priority(L);
- the dissolving symbol δ has at most one occurrence in the multiset v1 … vn.
The maximal parallel rewriting relations =⇒mpr_L and =⇒mpr are defined by the following inference rules:

For each w = u1 … un z ∈ O+_c such that z is L-irreducible, and (L, w)-consistent rules (u1 → v1, …, un → vn):

(R1)
    ─────────────────────────────
    u1 … un z =⇒mpr_L v1 … vn z
For each w ∈ O+_c, w′ ∈ (O ∪ Msg(O) ∪ {δ})+_c, and mpr-irreducible M∗ ∈ M∗(Π):

(R2)
    w =⇒mpr_L w′
    ─────────────────────────────────
    ⟨L | w ; M∗⟩ =⇒mpr ⟨L | w′ ; M∗⟩
For each L-irreducible w ∈ O∗_c, and M+, M+′ ∈ M+(Π):

(R3)
    M+ =⇒mpr M+′
    ─────────────────────────────────
    ⟨L | w ; M+⟩ =⇒mpr ⟨L | w ; M+′⟩
For each w ∈ O+_c, w′ ∈ (O ∪ Msg(O) ∪ {δ})+_c, and M+, M+′ ∈ M+(Π):

(R4)
    w =⇒mpr_L w′,  M+ =⇒mpr M+′
    ─────────────────────────────────
    ⟨L | w ; M+⟩ =⇒mpr ⟨L | w′ ; M+′⟩
For each M, M′ ∈ M(Π), and M+, M+′ ∈ M+(Π):

(R5)
    M =⇒mpr M′,  M+ =⇒mpr M+′
    ─────────────────────────────────
    M, M+ =⇒mpr M′, M+′
For each M, M′ ∈ M(Π), and mpr-irreducible M+ ∈ M+(Π):

(R6)
    M =⇒mpr M′
    ─────────────────────────────────
    M, M+ =⇒mpr M′, M+
We note that =⇒mpr for simple membranes can be described by rule (R2) with M∗ = NULL.

Remark 1.3. M is mpr-irreducible iff there does not exist M′ such that M =⇒mpr M′.

Proposition 1.4. Let Π be a P system. If C ∈ C(Π) and C′ ∈ C#(Π) such that C =⇒mpr C′, then C′ is mpr-irreducible.

The proof of Proposition 1.4 follows by structural induction on C.
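The maximal parallel rewriting step defined by rules (R1)–(R6) can be sketched for a single membrane as follows. This is a minimal illustration, not the Maude implementation discussed later: multisets are Python `Counter`s, rules are `(lhs, rhs)` pairs of `Counter`s, and priorities are omitted for simplicity. Objects produced in the current step are accumulated separately so they cannot be consumed again, which mirrors the split between u1 … un and the L-irreducible remainder z in rule (R1).

```python
import random
from collections import Counter

def applicable(rule, avail):
    """A rule lhs -> rhs is applicable iff lhs is a submultiset of avail."""
    lhs, _rhs = rule
    return all(avail[a] >= n for a, n in lhs.items())

def mpr_step(w, rules, rng=random):
    """One maximal parallel rewriting step over the multiset w: fire rules
    until the remaining pool is irreducible. Outputs are kept apart so they
    cannot feed another rule within the same step."""
    avail, produced = Counter(w), Counter()
    while True:
        cand = [r for r in rules if applicable(r, avail)]
        if not cand:                       # avail is now L-irreducible (the z of R1)
            return avail + produced        # corresponds to v_1...v_n z
        lhs, rhs = rng.choice(cand)        # nondeterministic choice of rule
        avail -= lhs                       # consume some u_i
        produced += rhs                    # emit the corresponding v_i

# Example: rules a -> bb and b -> c applied on the multiset aab;
# every a is rewritten in parallel with the single b.
rules = [(Counter("a"), Counter("bb")), (Counter("b"), Counter("c"))]
result = mpr_step(Counter("aab"), rules)
```

Note that the nondeterministic order in which the loop picks rules does not matter here: it only models the nondeterministic decomposition w = u1 … un z of rule (R1).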
The formal definition of =⇒mpr given above corresponds to the intuitive description of maximal parallelism. The nondeterminism is given by the associativity and commutativity of the concatenation operation over objects used in (R1). The parallelism of the evolution rules in a membrane is also given by (R1): u1 … un z =⇒mpr_L v1 … vn z says that the rules of the multiset (u1 → v1, …, un → vn) are applied simultaneously. The fact that the membranes evolve in parallel is described by rules (R3)–(R6).

1.2. Parallel Communication of Objects. We say that a multiset w is here-free/out-free/inL-free if it does not contain any here/out/inL messages, respectively. For w a multiset of objects and messages, we introduce the operations obj, here, out, and inL as follows: obj(w) is obtained from w by removing all messages, and

here(w) = empty if w is here-free, and here(w) = w″ if w = w′ (w″, here) with w′ here-free;
out(w) = empty if w is out-free, and out(w) = w″ if w = w′ (w″, out) with w′ out-free;
inL(w) = empty if w is inL-free, and inL(w) = w″ if w = w′ (w″, inL) with w′ inL-free.

We consider the extension of the operator w (previously defined over membranes) to non-empty sets of sibling membranes by setting w(NULL) = empty and w(M1, …, Mn) = w(M1) … w(Mn). We recall that the messages with the same target merge into one larger message.

Definition 1.5. The tar-irreducibility property for membranes and for sets of sibling membranes is defined as follows:
• a simple membrane ⟨L | w⟩ is tar-irreducible iff w is here-free, and L ≠ Skin ∨ (L = Skin ∧ w is out-free);
• a non-empty set of sibling membranes M1, …, Mn is tar-irreducible iff Mi is tar-irreducible for every i ∈ [n]; NULL is tar-irreducible;
• a composite membrane ⟨L | w ; M1, …, Mn⟩, n ≥ 1, is tar-irreducible iff: w is here-free and inL(Mi)-free for every i ∈ [n], L ≠ Skin ∨ (L = Skin ∧ w is out-free), w(Mi) is out-free for all i ∈ [n], and the set of sibling membranes M1, …, Mn is tar-irreducible.

Notation. We treat messages of the form (w′, here) as a particular communication inside a membrane, and we substitute (w′, here) by w′. We denote by w̄ the multiset obtained by replacing (here(w), here) with here(w) in w.
For instance, if w = a (bc, here) (d, out), then w̄ = abc (d, out), where here(w) = bc. We note that inL(w̄) = inL(w), and out(w̄) = out(w).

The parallel communication relation =⇒tar is defined by the following inference rules:

For each tar-irreducible M∗ ∈ M∗(Π) and multiset w such that here(w) ≠ empty, or L = Skin ∧ out(w) ≠ empty, or there exists Mi ∈ M∗ with inL(Mi)(w) out(w(Mi)) ≠ empty:

(C1)
    ─────────────────────────────────
    ⟨L | w ; M∗⟩ =⇒tar ⟨L | w′ ; M∗′⟩

where
w′ = obj(w̄) out(w(M∗)) if L = Skin, and w′ = obj(w̄) (out(w), out) out(w(M∗)) otherwise,
and w(Mi′) = obj(w(Mi)) inL(Mi)(w) for all Mi ∈ M∗.
For each M1, …, Mn, M1′, …, Mn′ ∈ M+(Π), and multiset w:

(C2)
    M1, …, Mn =⇒tar M1′, …, Mn′
    ─────────────────────────────────────────────
    ⟨L | w ; M1, …, Mn⟩ =⇒tar ⟨L | w″ ; M1″, …, Mn″⟩

where
w″ = obj(w̄) out(w(M1′, …, Mn′)) if L = Skin, and w″ = obj(w̄) (out(w), out) out(w(M1′, …, Mn′)) otherwise,
and each Mi″ is obtained from Mi′ by replacing its resources with w(Mi″) = obj(w(Mi′)) inL(Mi′)(w), for all i ∈ [n].
(C3 )
M =⇒ M ′ tar
M, M+ =⇒ M ′ , M+
For each M ∈ M(Π), M+ ∈ M+(Π):

(C4)
    M =⇒tar M′,  M+ =⇒tar M+′
    ─────────────────────────────────
    M, M+ =⇒tar M′, M+′
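To make the effect of the communication rules concrete, here is a small sketch (an illustration only; the function and tag names are hypothetical, not the notation of the text). A membrane is a tuple (label, contents, children); plain strings are objects, and messages are tagged tuples ("here", a), ("out", a), ("in", L, a). One tar step settles all messages: here-messages become objects in place, out-messages move to the parent region (or leave the system at the skin), and in_L-messages move into the child labelled L.

```python
def tar_step(m):
    """One parallel communication step: deliver the here/out/in_L messages.
    Returns (settled membrane, objects sent upward); at the skin the second
    component is what is expelled into the environment."""
    label, contents, children = m
    stay, up, down = [], [], {}
    for x in contents:
        if isinstance(x, tuple) and x[0] == "here":
            stay.append(x[1])                       # (a, here) becomes a
        elif isinstance(x, tuple) and x[0] == "out":
            up.append(x[1])                         # leaves this region
        elif isinstance(x, tuple) and x[0] == "in":
            down.setdefault(x[1], []).append(x[2])  # enters child labelled x[1]
        else:
            stay.append(x)                          # plain object stays put
    new_children = []
    for clabel, cw, cc in children:
        # in the semantics a message (a, in_L) only occurs when a child L exists
        extra = down.pop(clabel, [])
        settled, from_child = tar_step((clabel, cw + extra, cc))
        stay += from_child                          # the child's (a, out) land here
        new_children.append(settled)
    return (label, sorted(stay), new_children), up
```

For example, a skin containing (b, in_L2) and a child L2 containing (x, out) exchange one object in each direction in a single tar step.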
Remark 1.6. M is tar-irreducible iff there does not exist M′ such that M =⇒tar M′.

Proposition 1.7. Let Π be a P system. If C ∈ C#(Π) has messages and C =⇒tar C′, then C′ is tar-irreducible.

The proof of Proposition 1.7 is by structural induction on C.

1.3. Parallel Membrane Dissolving. If the special symbol δ occurs in the multiset of objects of a membrane labelled by L, that membrane is dissolved: its evolution rules and the associated priority relation are lost, and its contents (objects and membranes) are added to the contents of the surrounding membrane. We say that a multiset w is δ-free if it does not contain the special symbol δ.

Definition 1.8. The δ-irreducibility property for membranes and for sets of sibling membranes is defined as follows:
• a simple membrane is δ-irreducible iff it has no messages;
• a non-empty set of sibling membranes M1, …, Mn is δ-irreducible iff every membrane Mi is δ-irreducible, for 1 ≤ i ≤ n; NULL is δ-irreducible;
• a composite membrane ⟨L | w ; M+⟩ is δ-irreducible iff w has no messages, M+ is δ-irreducible, and w(M+) is δ-free.

The parallel dissolving relation =⇒δ is defined by the following inference rules:

For each M∗ ∈ M∗(Π), δ-irreducible ⟨L2 | w2 δ ; M∗⟩, and label L1:

(D1)
    ─────────────────────────────────────────────
    ⟨L1 | w1 ; ⟨L2 | w2 δ ; M∗⟩⟩ =⇒δ ⟨L1 | w1 w2 ; M∗⟩
For each M+ ∈ M+(Π), M∗′ ∈ M∗(Π), δ-free multiset w2, multisets w1, w2′, and labels L1, L2:

(D2)
    ⟨L2 | w2 ; M+⟩ =⇒δ ⟨L2 | w2′ ; M∗′⟩
    ─────────────────────────────────────────────
    ⟨L1 | w1 ; ⟨L2 | w2 ; M+⟩⟩ =⇒δ ⟨L1 | w1 ; ⟨L2 | w2′ ; M∗′⟩⟩
For each M+ ∈ M+(Π), M∗′ ∈ M∗(Π), multisets w1, w2, w2′, and labels L1, L2:

(D3)
    ⟨L2 | w2 δ ; M+⟩ =⇒δ ⟨L2 | w2′ δ ; M∗′⟩
    ─────────────────────────────────────────────
    ⟨L1 | w1 ; ⟨L2 | w2 δ ; M+⟩⟩ =⇒δ ⟨L1 | w1 w2′ ; M∗′⟩
For each M+ ∈ M+(Π), M∗′, N∗′ ∈ M∗(Π), δ-irreducible ⟨L | w ; N+⟩, and multisets w′, w″:

(D4)
    ⟨L | w ; M+⟩ =⇒δ ⟨L | w′ ; M∗′⟩
    ─────────────────────────────────────────────
    ⟨L | w ; M+, N+⟩ =⇒δ ⟨L | w′ ; M∗′, N+⟩
(D5)
    ⟨L | w ; M+⟩ =⇒δ ⟨L | w w′ ; M∗′⟩,  ⟨L | w ; N+⟩ =⇒δ ⟨L | w w″ ; N∗′⟩
    ─────────────────────────────────────────────
    ⟨L | w ; M+, N+⟩ =⇒δ ⟨L | w w′ w″ ; M∗′, N∗′⟩
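The dissolving rules can be illustrated by the following sketch (hypothetical names; membranes are the same (label, contents, children) tuples as plain Python data, and the string "delta" stands for δ). A membrane containing δ disappears: its objects, minus δ, are added to its parent and its children are adopted by the parent, as in rules (D1) and (D3); the top-level membrane (the skin) is never dissolved because only children are inspected.

```python
DELTA = "delta"   # stands for the dissolving symbol

def dissolve(m):
    """Parallel dissolving: every child containing DELTA is absorbed into m.
    Children are settled bottom-up first (as in rule (D3)), then merged into
    this membrane (rule (D1)); the membrane m itself is left in place."""
    label, contents, children = m
    new_contents, new_children = list(contents), []
    for child in children:
        clabel, cw, cc = dissolve(child)          # settle the subtree first
        if DELTA in cw:                           # child dissolves here
            new_contents += [x for x in cw if x != DELTA]
            new_children += cc                    # grandchildren are adopted
        else:
            new_children.append((clabel, cw, cc))
    return (label, new_contents, new_children)
```

For example, dissolving membrane 2 inside membrane 1 hands membrane 2's objects to membrane 1 and makes membrane 3 a direct child of membrane 1.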
Remark 1.9. M is δ-irreducible iff there does not exist M′ such that M =⇒δ M′.

Proposition 1.10. Let Π be a P system. If C ∈ C#(Π) is tar-irreducible and C =⇒δ C′, then C′ is δ-irreducible.

The proof of Proposition 1.10 follows by structural induction on C. It is worth noting that C ∈ C(Π) iff C is both tar-irreducible and δ-irreducible. According to the standard description in membrane computing, a transition step between two configurations C, C′ ∈ C(Π) is given by: C ⇒ C′ iff C and C′ are related by one of the following relations: either C =⇒mpr ; =⇒tar C′, or C =⇒mpr ; =⇒δ C′, or C =⇒mpr ; =⇒tar ; =⇒δ C′.
The three alternatives in the definition of C ⇒ C′ correspond to the presence of messages and dissolving symbols along the evolution of the system. Starting from a configuration without messages and dissolving symbols, we apply the "mpr" rules and get an intermediate configuration which is mpr-irreducible; if we have messages, we then apply the "tar" rules and get an intermediate configuration which is tar-irreducible; if we have dissolving symbols, we then apply the dissolving rules and get a configuration which is δ-irreducible. If the last configuration has no messages or dissolving symbols, we say that the transition relation ⇒ is well-defined as an evolution step between the first and last configurations.

Proposition 1.11. The relation ⇒ is well-defined over the entire set C(Π) of configurations.

Examples of inference trees, as well as the proofs of these results, are presented in [ACL2]. We have briefly presented the operational semantics in order to give sense to the implementations of P systems in rewriting logic.

2. Implementing Membrane Systems by Using Maude

Generally, by using a rewriting engine called Maude [28], a formal specification of a system can be automatically transformed into an interpreter. Moreover, Maude provides a useful search command, a semi-decision procedure for finding failures of safety properties, and also a model checker. Since P systems combine the power of parallel rewriting in various locations (compartments) with the power of local and contextual evolution, it is natural to use a rewriting engine and a rewrite theory. Roughly speaking, a rewrite theory is a triple (Σ, E, R), where (Σ, E) is an equational theory used for implementing the deterministic computation (thus (Σ, E) should be terminating and Church-Rosser), and R is a set of rewrite rules used to implement nondeterministic and/or concurrent computations. Therefore we find rewriting logic suitable for implementing membrane systems.
The Web page of the implementations described here can be found at http://thor.info.uaic.ro/~rewps/index.html. The evolution of a P system consists of the maximal parallel application of the evolution rules according to their priorities (if any), in (repeated) steps of internal evolution, communication, and dissolving. This sequence of steps uses a kind of synchronization. The first challenge is to describe the maximal parallel rewriting, because this is not quite natural for rewriting logic. The reflection property of rewriting logic is used for defining maximally parallel rewriting in [ACL1], where a P system was defined at the object level and its semantics at the meta-level. The description at the meta-level assumes many additional operations, and therefore the checking and the analysis of such a specification was time consuming. Here we use a different approach: the evolution rules are represented as terms at the object level. This allows us to define the operational semantics at the object level and, consequently, the checking and the analysis of the specification are more efficient.
The second challenge is given by the sequence of internal steps: evolution, communication, and dissolving. The terms denoting membranes are enhanced with colours, and these colours are used by the rewrite engine to choose the appropriate (sub)set of rewriting rules. The definition of the semantics of P systems in rewriting logic reveals an interesting aspect: internal evolution, communication, and dissolving inside a complex membrane may interleave. If only main configurations are observable, then the big-step semantics given by a maximal parallel step and the small steps executed by a single machine to simulate the maximal parallel step are behaviourally equivalent.

2.1. Maude Evolution Rules. A P system has a tree-like structure, with the skin as its root, the composite membranes as its internal nodes, and the elementary membranes as its leaves. The order of the children of a node is not important, due to the associativity and commutativity of the concatenation operation "," over membranes. Since Maude is not able to execute the parallel transitions required by a P system, we use a sequential application of the rules. In this sequential process, in order to prevent the result of one rule from being used by another, we mark the intermediate configurations with colours. This is achieved by using two operators, blue and green. The operator blue traverses the tree in a top-down manner, firing the maximal parallel rewriting process in every membrane through the blue operator on Soup terms. The Boolean argument of the operation blue shows whether a dissolving rule is chosen during the current maximal parallel step in a membrane with dissolving rules; we need this flag because at most one δ symbol is allowed in a membrane.
The multiset of objects is divided into two parts during the maximal parallel rewriting process: blue represents the objects available to be "consumed" by evolution rules, while green represents the objects resulting from applying evolution rules over the blue objects (therefore the green objects are no longer available for the current evolution step). When no more rules can be applied, the remaining blue objects (if any) become green. The operator green traverses the tree in a bottom-up manner as follows:
- green multisets from (O ∪ Msg(O) ∪ {δ})∗_c merge into one green multiset;
- a leaf becomes green if its multiset is green;
- a set of sibling subtrees (with their roots sibling nodes) becomes green if each subtree is green;
- a subtree becomes entirely green if: (1) the multiset of the root is green; (2) the subtrees determined by the children of the root form a green set of sibling subtrees; (3) there are no messages to be exchanged between the root node and its children.
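The conditions under which a subtree becomes entirely green can be summed up in a small recursive predicate. This is a sketch with hypothetical field names, not Maude code: each node records whether its own multiset has turned green and how many messages are still pending between it and its children.

```python
def subtree_green(node):
    """node = (soup_green, pending_msgs, children): a subtree is entirely
    green iff its root multiset is green, no messages remain to be exchanged
    with the children, and every child subtree is itself green."""
    soup_green, pending_msgs, children = node
    return (soup_green
            and pending_msgs == 0
            and all(subtree_green(c) for c in children))
```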
In the communication stage of a green subtree, a node can send a message only to its parent or to one of its children. The rules for each direction of communication (out and in) vary with the structure of the destination membrane. In a P system, the dissolving process occurs after the end of the communication process. To fulfil this condition, we allow dissolving only if there are no messages to be sent. By dissolving the membrane of a node, all of its objects are transferred to the membrane of its parent, its rules are lost, and, if it is an internal node, all of its children become children of its parent. The skin membrane is not allowed to be dissolved. The resulting term corresponds to a configuration of the P system reachable in one transition step from the given configuration.

More details on the rules implementing the bottom-up traversal are presented in [ACL4]. In the same paper, the dynamics of P systems and their implementations using Maude are related. Such a relationship between the operational semantics of P systems and the Maude rewriting relation is given by operational correspondence results. Let Π = (O, µ, w1, …, wn, (R1, ρ1), …, (Rn, ρn), i0) be a P system having the initial configuration ⟨L1 | w1 ; Mi1, …, Min⟩, {i1, …, in} ⊆ {2, …, n}, with rules(Lj) = Rj and priority(Lj) = ρj for all membrane labels Lj, and let ⇒ be the transition relation between two configurations. We associate to Π a rewriting theory R(Π) = (Σ, E, R); Σ is the equational signature defining sorts and operation symbols, E is the set of Σ-equations, which also includes the appropriate axioms for the associativity, commutativity and identity attributes of the operators, and R is the set of rewriting rules. Considering −→R(Π) the rewriting relation, we denote by −→+R(Π) its transitive closure, and by −→∗R(Π) its reflexive and transitive closure.
An encoding function Im : M(Π) → (TΣ,E)Membrane from the set of membranes M(Π) to the ground terms of sort Membrane of the associated rewriting theory is defined by:
• if M = ⟨L | w⟩, then Im(M) = < L | w >;
• if M = ⟨L | w ; M1, …, Mn⟩, then Im(M) = < L | w ; Im(M1), …, Im(Mn) >;
where L is a constant of sort Label, and w is a term of sort Soup. We extend the encoding function Im over non-empty sets of sibling membranes by Im(M1, …, Mk) = Im(M1), …, Im(Mk), for k ≥ 2. We also define an encoding function I : C(Π) → (TΣ,E)Configuration from the set of configurations to the ground terms of sort Configuration such that I(C) = {Im(C)} for every C ∈ C(Π).

Proposition 1.12. a) If blue(L, S) −→ green(S′), then S =⇒mpr_L S′.
b) Conversely, if S =⇒mpr_L S′, then there is a rewrite blue(L, S) −→ green(S′).
These results are proved by induction on the length of the rewriting (a), and by induction on the number of rules applied in parallel (b). Using similar results presented in [ACL4], we finally get the following theorem:

Theorem 1.13 (Operational Correspondence). The rewriting relation given by the implementing rules represents a correct and complete implementation of the relation ⇒ defined by the operational semantics.

The implementation of the sequential composition using a general rewrite engine like Maude requires some auxiliary operations and the verification of conditions. It is possible to define an alternative to the sequential composition, thus getting various granularities (between "small" and "big") for the operational semantics of P systems. By replacing some rules, it is possible to relax the strict separation between the internal steps given by the evolution of membranes, communication, and dissolving. If two parent-child membranes finish their internal evolution, then they can communicate without waiting for the other membranes of the system to finish their evolution step. Similarly, if two parent-child membranes finish the communication, then the child may dissolve whenever it has a δ object.

In this way, we reveal two forms of correctness of an implementation with respect to the given operational semantics. In a stronger form, we say that an implementation I is faithful if and only if it is defined by three relations ⇝mpr, ⇝tar, and ⇝diss such that I(C) ⇝mpr I(C1) whenever C =⇒mpr C1, I(C1) ⇝tar I(C2) whenever C1 =⇒tar C2, and I(C2) ⇝diss I(C′)
whenever C2 =⇒δ C′, for all configurations C, C′ and intermediate configurations C1, C2. In a weaker form, we say that an implementation I is accurate if and only if it is defined by a relation ⇝ such that I(C) ⇝ I(C′) whenever C ⇒ C′, for all configurations C, C′. In an accurate implementation it is possible to execute parallel transitions of different phases. This fact can increase the potential parallelism of the rewriting implementations of P systems. On the other hand, a faithful implementation generates a smaller state space. We can exemplify these aspects by using a simple P system ⟨L1 | aa ; ⟨L2 | yy ; ⟨L3 | vv⟩⟩⟩, where rules(L1) = a → (b, in(L2)), rules(L2) = y → (x, out)(z, in(L3)), and rules(L3) = v → (u, out). We use the Maude command search to generate the whole state space. For the faithful implementation we get 292 states:

Maude> search init =>+ C:Configuration .
search in EX : init =>+ C:Configuration .
Solution 1 (state 262)
states: 263  rewrites: 3059 in 41ms cpu (42ms real)
C:Configuration --> {< L1 | x x ; < L2 | b b u u ; < L3 | z z > > >}
No more solutions.
states: 292  rewrites: 3420 in 45ms cpu (46ms real)
For the accurate implementation we get 752 states:

Maude> search init =>+ C:Configuration .
search in EX : init =>+ C:Configuration .
Solution 1 (state 731)
states: 732  rewrites: 11001 in 90ms cpu (90ms real)
C:Configuration --> {< L1 | x x ; < L2 | b b u u ; < L3 | z z > > >}
No more solutions.
states: 752  rewrites: 11371 in 92ms cpu (92ms real)
Since we have a higher level of parallelism in the accurate implementation, there are more paths from the initial configuration to the unique final one. The additional states are given by configurations in which different membranes are in different phases, e.g., L1 evolves while L2 and L3 communicate. These forms of implementation exhibit different levels of parallelism which can be exploited in analyzing P systems. An accurate implementation can be used to analyze more aspects inspired by biology; such an implementation is also appropriate when we are interested in speeding up the execution on a parallel machine. On the other hand, if we are interested in investigating only the configurations (states), then it is better to use a faithful implementation.

Related Work. Structural operational semantics is an approach originally introduced by Plotkin [97] in which the operational semantics of a programming language or a computational model is specified in a logical way, independent of machine architecture or implementation details, by means of rules that provide an inductive definition based on the elementary structures of the language or model. Within structural operational semantics, two main approaches coexist:
• Big-step semantics is also called natural semantics in [55, 82], and evaluation semantics in [46]. In this approach, the main inductive predicate describes the overall result or value of executing a computation, ignoring the intermediate steps.
• Small-step semantics is also called structural operational semantics in [82, 97], and computational semantics in [46]. In this approach, the main inductive predicate describes in more detail the execution of individual steps in a computation, with the overall computation roughly corresponding to the transitive closure of such small steps.
In general, the small-step style tends to require a greater number of rules than the big-step style, but this is outweighed by the fact that the small-step rules also tend to be simpler. The small-step style also facilitates the description of interleaving [79]. The inference rules of P systems provide a big-step operational semantics due to the parallel nature of the model. The big-step operational semantics of P systems can be implemented by using a rewriting engine (Maude), and so we get a small-step operational description. The advantages of the implementations in Maude are given by the solid theoretical foundations of rewriting logic, and by the complex tools available in Maude. By using an efficient implementation of rewriting logic such as Maude [28], we can verify various properties of these systems by means of a search command (a semi-decision procedure for finding failures of safety properties) and a Linear Temporal Logic model checker. These achievements are presented in [ACL2].

In [CL], the nature of parallelism and nondeterminism of membrane systems is expressed in terms of event structures [105], a known formal model using both causality and conflict relations. In event-based models, a system is represented by a set of events (action occurrences) together with some structure on this set, determining the causality relations between the events. The causality between actions is expressed by a partial order, and the nondeterminism is expressed by a conflict relation on actions. The behaviour of an event structure is formalized by associating to it a family of configurations representing sets of events which occur during the executions of the system. A parallel step simultaneously executes several rules, each of them producing events which end up in the resulting event configuration. These steps are presumably cooperating to achieve a goal, and so they are not totally independent: they synchronize at certain points, and this is reflected in the produced events. In [CL] the authors determine the event structure given by a membrane system. Here we present a modular approach to causality in membrane systems, using both string and multiset rewriting. In order to deal with membrane systems, the event structures are extended with notions such as maximal concurrent transitions and states saturated with respect to concurrency.
The event structure of a membrane system is defined in two steps: first the event structure of a maximal parallel step in membranes is defined, and then it is combined with a communication step. The main result proves that an event structure of a membrane corresponds to its operational semantics. Event structures for communicating membranes are also defined. In [14], the author describes the causal dependencies occurring between the reactions of a P system, investigating the basic properties which are satisfied by such a semantics. The paper [8] defines a semantics of P systems by means of a process algebra. The terms of the algebra are objects, rules or membranes; an equivalence of membranes is defined with respect to the objects which enter/exit the membrane. The semantics is compositional with respect to the inclusion of a membrane in another membrane. This is obtained by considering each object and rule separately, with evolution given by possible contexts in which it may find itself embedded. The paper [61] describes a class of Petri nets suitable for the study of behavioural aspects of membrane systems. Localities are used as an extension of Petri nets in order to describe the compartments defined in P systems, and so leading to a locally maximal concurrency semantics for
Petri nets. Causality is also considered, using information obtained from the Petri net representation of a P system. In [42], the authors reason at the abstract level of networks of cells (including tissue P systems) with static structure. They adopt an implementation point of view, and give a formal definition of the derivation step, the halting condition and the procedure for obtaining the result of a computation. For (tissue) P systems, parameters for rules are employed in order to describe the specific features of the rules.
3. Register Membranes for Rules with Promoters and Inhibitors

In this section we present two operational semantics of membrane systems which differ only in the way the maximal parallel application of rules is described. These two operational semantics reflect the fact that resource allocation to rules can be done either statically or dynamically. For membrane systems with promoters and inhibitors, dynamic allocation requires the addition of a register to each membrane of the system. We define an operational semantics of membrane systems by means of three sets of inference rules corresponding to maximal parallel rewriting, sending messages, and dissolving. A minimal set of inference rules is defined, and their behaviour is detailed along with the presentation of the rewriting logic implementation. We use a uniform representation of rather complex membrane systems with promoters and inhibitors, and get a flexible interpreter for them in Maude. The main results provide the correspondence between the operational semantics of dynamic allocation and the rewriting theory.

We can associate promoters and inhibitors with a rule u → v, in the form (u → v)|w_prom,¬w_inhib, with w_prom, w_inhib non-empty multisets of objects; however, we do not consider priorities for rules. A rule (u → v)|w_prom,¬w_inhib associated with a membrane i is applied only if w_prom is present in, and w_inhib is absent from, the region of membrane i. The promoters and inhibitors of membrane systems formalize the reaction-enhancing and reaction-prohibiting roles of various substances present in cells. Membrane systems with promoters or with inhibitors provide characterizations of the recursively enumerable sets (of vectors of natural numbers) [11]. For a rule (u → v)|w_prom,¬w_inhib we consider the following notations: lhs(r) = u, rhs(r) = v, promoter(r) = w_prom, inhibitor(r) = w_inhib.
These multisets must satisfy some conditions: u, w_prom, w_inhib contain only objects, u contains at least one object, and v contains only messages. If a rule r has no associated promoter, we set promoter(r) = empty; if the rule has no associated inhibitor, we also set inhibitor(r) = empty (this convention is used for the sake of uniformity, so that we do not have to differentiate between rules with and without promoters or inhibitors). We consider multisets as functions, since some conditions are easier to express in this manner.
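Viewing multisets as functions O → N indeed makes the promoter and inhibitor conditions one-liners. A sketch (illustrative helper names only), with Python `Counter`s playing the role of the functions:

```python
from collections import Counter

def leq(m1, m2):
    """Submultiset order on multisets-as-functions: m1 <= m2 iff
    m1(a) <= m2(a) for every object a."""
    return all(m2[a] >= n for a, n in m1.items())

def absent(inhibitor, w):
    """The inhibitor is absent from w iff some object a has
    w(a) < inhibitor(a); the empty inhibitor is never 'absent',
    matching the convention inhibitor(r) = empty for no inhibitor."""
    return any(w[a] < n for a, n in inhibitor.items())
```

For example, leq(Counter("ab"), Counter("aabb")) holds, while absent(Counter(), w) is always False, so a rule without an inhibitor is never blocked by this test.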
3.1. Dynamic Allocation Semantics. In order to give an operational semantics for membrane systems which can easily be transcribed into a rewriting logic implementation, we present a semantics based on applying rules one by one, in a nondeterministic manner, until no applicable rule is left. We call this a dynamic allocation semantics. In a maximal parallel evolution step, a rule's applicability with respect to promoters and inhibitors depends only on the initial content of the membrane. For this reason, when each transition consists of applying only one rule, we need to store somewhere the objects consumed by previous rule applications. Thus we consider registers, each consisting of a multiset of objects consumed previously (in the same maximal parallel rewriting step), to keep track of rule application. In the same manner as in the previous definition, we construct the set Mh(Π) of register membranes:
• if M ∈ M(Π) is an elementary membrane and u : O → N, then the pair (M, u) ∈ Mh(Π) and is called an elementary register membrane;
• if M = ⟨i | w ; M1 … Mn⟩ is a composite membrane and u, u1, …, un : O → N, then (⟨i | w ; (M1, u1) … (Mn, un)⟩, u) ∈ Mh(Π) is called a composite register membrane.

Registers are used to ensure that dynamic allocation is correct; they are only used during the rule application stage. We can see a membrane of M(Π) as an equivalence class of register membranes, i.e., obtained by ignoring the registers. Namely, we define inductively a relation ≡ on Mh(Π) as follows: (M, u) ≡ (M, v) for all u, v : O → N, and if H1 ≡ H1′, …, Hn ≡ Hn′ then (⟨i | w ; H1, …, Hn⟩, u) ≡ (⟨i | w ; H1′, …, Hn′⟩, v). Clearly, ≡ is an equivalence relation.

Proposition 1.14. The set Mh(Π)/≡ of equivalence classes is isomorphic to the set M(Π) of membranes of the P system Π.
The bijection is φ̂ : Mh(Π)/≡ → M(Π), induced by φ : Mh(Π) → M(Π), which is obtained by considering φ(⟨i | w⟩, u) = ⟨i | w⟩ and φ(⟨i | w ; H1, …, Hn⟩, u) = ⟨i | w ; φ(H1), …, φ(Hn)⟩. The states of the following transition system are register membranes. The labels are taken from the set Rules ∪ {τ}, where Rules = ∪i∈[n] Rules(i), and τ denotes a silent action, namely the evolution of a membrane in which all rewrites take place in the inner membranes while the content of the top membrane stays the same. The following definition gives a mathematical description of what it means for a rule r to be applicable in a membrane M with register u.
Definition 1.15. Consider H ∈ Mh(Π), H = (⟨i | w ; H∗⟩, u), and r ∈ Rules(i); H∗ is a (possibly empty) set of register membranes. We say that the pair (H, r) is valid when:
• lhs(r) ≤ w and promoter(r) ≤ w + u;
• if inhibitor(r) ≠ empty, then ∃a ∈ O such that (w + u)(a) < inhibitor(r)(a);
• if H∗ is empty, then rhs(r)(a, inj) = 0, ∀j ∈ [m];
• if H∗ = {H1, …, Hn} and j1, …, jn are the labels of H1, …, Hn respectively, then rhs(r)(a, inj) = 0, ∀j ∉ {j1, …, jn}.

A register membrane H is mpr-irreducible when there is no rule which can be applied in it or in any of the membranes it contains. We define inductively a transition relation Tmpr ⊆ Mh(Π) × (Rules ∪ {τ}) × Mh(Π) as follows:
• if H = (⟨i | w⟩, u) is an elementary register membrane and (H, r) is valid, then

(seq-elem)
    (⟨i | w⟩, u) →r (⟨i | w − lhs(r) + rhs(r)⟩, u + lhs(r))
• if H = (hi|w; H1 , . . . , Hn i , u) is a composite register membrane and ∃j ∈ [n] such that Hj is not mpr-irreducible, then l
(silent)
Hj → Hj′ τ (hi|w; H1 , . . . , Hn i , u) → (hi|w; H1′ , . . . , Hn′ i , u)
where Hk′ = Hk , ∀k 6= j and l ∈ Rules ∪ {τ }; • if H = (hi|w; H1 , . . . , Hn i , u) is a composite register membrane such that Hj are mpr-irreducible ∀j ∈ [n] and (H, r) is valid, then (rewrite)
r
(hi|w; H1 , . . . , Hn i , u) → (hi|w − lhs(r) + rhs(r); H1 , . . . , Hn i , u + lhs(r))
Note that the rules (seq-elem) and (rewrite) ensure that rules are first applied in elementary membranes until they become irreducible, then in their parents, and so on.
We now present two other transition relations T_msg ⊆ M(Π) × {msg} × M(Π) and T_diss ⊆ M(Π) × {diss} × M(Π) which express the message passing and the dissolving steps in the evolution of a membrane system. We use the isomorphism from Proposition 1.14 to glue together T_mpr, T_msg and T_diss. We denote by Ω the set of all objects together with all the messages which can occur in a membrane. This means that, for example, multisets of objects are considered to be multisets over Ω with the support included in O. We use promoter(r) = 0_Ω to express that a rule r has no promoter; similarly for no inhibitor or dissolving symbol. A membrane M evolves to a membrane N when:
• (H_M, H) ∈ T*_mpr, where H_M is the register membrane with its register and all the registers of its children equal to 0_Ω, φ(H_M) = M, H is an mpr-irreducible register membrane and T*_mpr is the transitive closure of T_mpr; or H_M is mpr-irreducible and H := H_M;
26
1. MEMBRANE SYSTEMS AND THEIR SEMANTICS
• if φ(H) is msg-irreducible then M′ := φ(H); otherwise, if φ(H) is not msg-irreducible, then there is a unique membrane M′ such that (φ(H), M′) ∈ T_msg;
• if M′ is diss-irreducible then N := M′; otherwise, if M′ is not diss-irreducible, then N is the unique membrane for which (M′, N) ∈ T_diss.

Schematically: H_M →_mpr ⋯ →_mpr H, with φ(H_M) = M, followed by φ(H) →_msg M′ →_diss N.
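The register-based small-step discipline above can be sketched in Python. This is an illustrative model only: multisets are `Counter`s, the helper names (`valid`, `seq_elem`, `mpr_phase`) are ours, and the right-hand sides are collected in a separate pool to mimic the fact that produced objects are messages, inert until the step ends.

```python
from collections import Counter

# A rule is a dict with multisets "lhs", "rhs" and optional "promoter"/"inhibitor".

def valid(rule, w, u):
    """Definition 1.15 for an elementary membrane: lhs within the remaining
    content w; promoter within w+u (the initial content, since u logs what was
    consumed); inhibitor must NOT be fully present in w+u."""
    total = w + u
    if not all(w[a] >= n for a, n in rule["lhs"].items()):
        return False
    if not all(total[a] >= n for a, n in rule.get("promoter", Counter()).items()):
        return False
    inh = rule.get("inhibitor", Counter())
    if inh and all(total[a] >= n for a, n in inh.items()):
        return False  # inhibitor fully present: rule blocked
    return True

def seq_elem(rule, w, u, produced):
    """One (seq-elem) step: consume lhs, log it in the register u, stash rhs."""
    return w - rule["lhs"], u + rule["lhs"], produced + Counter(rule["rhs"])

def mpr_phase(rules, w):
    """Apply rules one by one until none is applicable: one maximal step."""
    u, produced = Counter(), Counter()
    changed = True
    while changed:
        changed = False
        for r in rules:
            if valid(r, w, u):
                w, u, produced = seq_elem(r, w, u, produced)
                changed = True
                break
    return w + produced  # produced objects become available only now
```

Note that `w + u` reconstructs exactly the initial membrane content, so promoter/inhibitor checks depend only on it, as the semantics requires.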
In what follows, let w(M) denote the multiset contained in the membrane M, and L(M) denote the label of M. We define the following functions over multisets: the cleanup function modifies the multiset by "erasing" objects with messages of the form in_child and by "transforming" objects of the form (a, here) into objects of the form a. The out function collects the objects a from the objects with messages of the form (a, out); the function in_j is similarly defined. The eraseOut function removes objects with messages of the form (a, out) from a multiset; similarly, eraseDelta removes the special symbols δ. The transition relation T_msg is given by the following inference rules:
• if w ≠ cleanup(w), or i = 1 and w ≠ eraseOut(cleanup(w)), then

(msg1)  ⟨i|w⟩ →_msg ⟨i|w′⟩

where w′ = cleanup(w) if i ≠ 1, and w′ = eraseOut(cleanup(w)) if i = 1;
• if M_j are msg-irreducible for all j ∈ J ⊆ [n], then

(msg2)  M_k →_msg M_k′ for all k ∈ [n]\J implies ⟨i|w; M_1, …, M_n⟩ →_msg ⟨i|w′; M_1′′, …, M_n′′⟩

where the M_l′′ have the same structure as the M_l, but
w(M_j′′) = eraseOut(w(M_j) + in_{L(M_j)}(w)) for all j ∈ J,
w(M_k′′) = eraseOut(w(M_k′) + in_{L(M_k)}(w)) for all k ∉ J,
and either w′ = cleanup(w) + Σ_{l∈[n]} out(w(M_l)) whenever i ≠ 1, or w′ = eraseOut(cleanup(w)) + Σ_{l∈[n]} out(w(M_l)) if i = 1.
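A possible reading of these auxiliary functions, with messages modelled as tuples (the representation and the snake-case names are ours, not from the text):

```python
from collections import Counter

# Messages are tuples: (a, "here"), (a, "out"), (a, "in", j); plain objects are strings.

def cleanup(w):
    """Erase (a, in_j) messages and turn (a, here) into a; keep everything else."""
    res = Counter()
    for m, n in w.items():
        if isinstance(m, tuple) and m[1] == "in":
            continue                       # will be delivered to the child membrane
        elif isinstance(m, tuple) and m[1] == "here":
            res[m[0]] += n                 # (a, here) becomes the object a
        else:
            res[m] += n
    return res

def out_objects(w):
    """The `out` function: collect objects a from messages (a, out)."""
    return Counter({m[0]: n for m, n in w.items()
                    if isinstance(m, tuple) and m[1] == "out"})

def in_objects(w, j):
    """The `in_j` function: collect objects a from messages (a, in, j)."""
    return Counter({m[0]: n for m, n in w.items()
                    if isinstance(m, tuple) and m[1] == "in" and m[2] == j})

def erase_out(w):
    """The eraseOut function: remove (a, out) messages from a multiset."""
    return Counter({m: n for m, n in w.items()
                    if not (isinstance(m, tuple) and m[1] == "out")})
```

Note that `cleanup` deliberately keeps (a, out) messages: they are collected by the parent via `out_objects` and removed by `erase_out` (in the skin membrane).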
In order to define the transition relation T_diss, we first define the notion of diss-irreducibility.

Definition 1.16. Any elementary membrane is diss-irreducible. A composite membrane ⟨i|w; M_1, …, M_n⟩ is diss-irreducible if w(M_j)(δ) = 0 and M_j is diss-irreducible for all j ∈ [n].

In what follows, M_∗ and N_∗ range over (possibly empty) sets of membranes. The transition relation T_diss is given by the following inference rules:
• if M_s = ⟨i_s|w_s; M_{∗s}⟩ are diss-irreducible for all s ∈ [n], and there exists a non-empty J ⊂ [n] such that w_j(δ) = 1 for j ∈ J and w_k(δ) = 0 for k ∉ J, then

(diss1)  ⟨i|w; M_1, …, M_n⟩ →_diss ⟨i|w′; M_∗⟩

where w′ = w + Σ_{j∈J} eraseDelta(w_j) and M_∗ = (∪_{k∉J} {M_k}) ∪ (∪_{j∈J} M_{∗j});
• if M_j are diss-irreducible for all j ∈ J, where J ⊆ [n], then

(diss2)  M_k →_diss M_k′ for all k ∉ J implies ⟨i|w; M_1, …, M_n⟩ →_diss ⟨i|w; M_1′, …, M_n′⟩

where M_j′ = M_j for all j ∈ J.
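The (diss1) step can be sketched for one level of nesting (an illustrative representation of ours, in which children are (content, grandchildren) pairs):

```python
from collections import Counter

DELTA = "δ"

def dissolve(w, children):
    """One (diss1) step: children marked with δ dissolve; their contents
    (minus δ, as eraseDelta does) go to the parent, and their own children
    are adopted by the parent."""
    kept, absorbed = [], Counter()
    for cw, grand in children:
        if cw[DELTA] > 0:
            cw = Counter(cw)
            del cw[DELTA]          # eraseDelta
            absorbed += cw         # contents move up to the parent
            kept.extend(grand)     # grandchildren are adopted
        else:
            kept.append((cw, grand))
    return w + absorbed, kept
```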
3.2. Static Allocation Semantics. Let R be a multiset over Rules(i) for some label i. We denote by lhs(R) the multiset over Ω given by lhs(R)(a) = Σ_{r∈Rules(i)} R(r) · lhs(r)(a). The multiset rhs(R) is defined in the same way. We also define promoter(R)(a) = max_{r∈supp(R)} promoter(r)(a).

Definition 1.17. For a membrane M = ⟨i|w; M_∗⟩ and for R a multiset of rules over Rules(i), we say that the pair (M, R) is valid when:
• lhs(R) ≤ w and promoter(R) ≤ w;
• for all r ∈ supp(R), either inhibitor(r) = 0_Ω or there exists a_r ∈ Ω such that w(a_r) < inhibitor(r)(a_r);
• if rhs(R)(a, in_j) > 0 then there exists a membrane with label j in M_∗.
We say that the pair (M, R) is maximally valid if it is valid and, for any multiset R′ over Rules(i) such that R ≤ R′ and (M, R′) is valid, it follows that R = R′.

Note that the multiset R is not required to be non-empty; therefore the pair (M, 0_{Rules(i)}) is valid, and can even be maximally valid (when no rule from Rules(i) can be applied). We define a transition system over the set of membranes by the following rules:
• if (⟨i|w⟩, R) is maximally valid then

(mpr1)  ⟨i|w⟩ →^R ⟨i|w − lhs(R) + rhs(R)⟩

• if (⟨i|w; M_1, M_2, …, M_k⟩, R) is maximally valid then

(mpr2)  M_1 →^{R_1} N_1, …, M_k →^{R_k} N_k implies ⟨i|w; M_1, M_2, …, M_k⟩ →^R ⟨i|w − lhs(R) + rhs(R); N_1, N_2, …, N_k⟩
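One way to realize static allocation is to build a maximally valid multiset greedily: keep adding a nondeterministically chosen applicable rule until no rule's left-hand side fits in the leftover. This is a sketch under simplifying assumptions (promoters and inhibitors are ignored; function names are ours):

```python
from collections import Counter
import random

def maximally_valid(rules, w, rng=random.Random(0)):
    """Greedily build a maximally valid multiset R of rule indices for
    content w: the loop stops exactly when no lhs fits the leftover,
    which is the maximality condition of Definition 1.17/1.23."""
    R = Counter()
    remaining = Counter(w)
    while True:
        applicable = [i for i, r in enumerate(rules)
                      if all(remaining[a] >= n for a, n in r["lhs"].items())]
        if not applicable:
            return R, remaining
        i = rng.choice(applicable)
        R[i] += 1
        remaining = remaining - rules[i]["lhs"]

def apply_multiset(rules, R, w):
    """w - lhs(R) + rhs(R), as in rule (mpr1)."""
    for i, n in R.items():
        for _ in range(n):
            w = w - rules[i]["lhs"] + Counter(rules[i]["rhs"])
    return w
```

Different random choices give the different maximally valid multisets, reflecting the nondeterminism of the maximally parallel step.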
We say that M is mpr-irreducible if M →^{0_Rules} M, and all its children are also mpr-irreducible. Note that even if M →^R N, it does not mean that N is mpr-irreducible: since inhibitors can be consumed by the multiset of rules R, it may be the case that there is a non-empty multiset S of rules such that (N, S) is maximally valid.

We now prove the equivalence between the static and dynamic allocation semantics. In what follows, let w(H) denote the content of a register membrane H, and reg(H) its register, i.e. H = (⟨i|w(H); H_∗⟩, reg(H)).
Proposition 1.18. Consider a membrane M which is mpr-reducible, and let H_M ∈ φ^{-1}(M) denote the register membrane corresponding to M which has its register and all the registers of inner membranes equal to 0_Ω. If M →^R N then there exist l_1, …, l_k ∈ Rules ∪ {τ} such that H_M →^{l_1} ⋯ →^{l_k} H, H is mpr-irreducible and card{i ∈ [k] | l_i = r} = R(r) for every rule r. Moreover, N = φ(H).
Proposition 1.19. If H_0 is a register membrane with its register and all registers of inner membranes equal to 0_Ω, and H_0 →^{l_1} ⋯ →^{l_k} H_k in T_mpr such that H_k is mpr-irreducible, then there exists a multiset R over the rules in φ(H_0) such that φ(H_0) →^R φ(H_k). Moreover, R(r) = card{i ∈ [k] | l_i = r}.
These two propositions show that the static allocation can be considered a big-step semantics [55] for the rewriting stage of the evolution of a membrane, while dynamic allocation provides an equivalent small-step semantics [97]. We use the static allocation semantics together with the previously defined T_msg and T_diss to describe the evolution of a membrane system, analogously to the manner in which we used together T_mpr, T_msg and T_diss. A Maude implementation of the dynamic allocation semantics is presented in [AC2].

3.3. Bisimulation. Operational semantics provides a formal way to find out which transitions are possible for the current configurations of a P
system. It provides the basis for defining certain equivalences and congruences between P systems. Moreover, operational semantics allows a formal analysis of membrane computing, permitting the study of relations between systems. Important relations include simulation preorders and bisimulation; these are especially useful with respect to P systems, allowing us to compare two P systems. A simulation preorder is a relation between two transition systems associated to P systems expressing that the second one can match the transitions of the first one. We present a simulation as a relation over the states of a single transition system rather than between the configurations of two systems. Often a transition system consists intuitively of two or more distinct systems, but we also need our notion of simulation over the same transition system. Therefore our definitions relate configurations within one transition system, and this is easily adapted to relate two separate transition systems by building a single transition system consisting of their disjoint union.

Definition 1.20. Let Π be a P system.
(1) A simulation relation is a binary relation R over C(Π) such that for every pair of configurations C_1, C_2 ∈ C(Π), if (C_1, C_2) ∈ R, then for all C_1′ ∈ C(Π), C_1 ⇒ C_1′ implies that there is a C_2′ ∈ C(Π) such that C_2 ⇒ C_2′ and (C_1′, C_2′) ∈ R.
(2) Given two configurations C, C′ ∈ C(Π), C simulates C′, written C′ ≤ C, iff there is a simulation R such that (C, C′) ∈ R. In this case, C and C′ are said to be similar, and ≤ is called the similarity relation.

The similarity relation is a preorder. Furthermore, it is the largest simulation relation over a given transition system. A bisimulation is an equivalence relation between transition systems such that one system simulates the other and vice versa. Intuitively, two systems are bisimilar if they match each other's transitions, and their evolutions cannot be distinguished.

Definition 1.21. Let Π be a P system.
(1) A bisimulation relation is a binary relation R over C(Π) such that both R and R^{-1} are simulation relations.
(2) Given two configurations C, C′ ∈ C(Π), C is bisimilar to C′, written C ∼ C′, iff there is a bisimulation R such that (C, C′) ∈ R. In this case, C and C′ are said to be bisimilar, and ∼ is called the bisimilarity relation.

The bisimilarity relation ∼ is an equivalence relation. Furthermore, it is the largest bisimulation relation over a given transition system.
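On a finite abstraction of such a transition system, the largest bisimulation can be computed by starting from the full relation and repeatedly discarding pairs whose transitions cannot be matched. This is a naive fixed-point sketch (names are ours; `step` maps each state to the set of its successors, transitions taken unlabelled as in Definitions 1.20 and 1.21):

```python
def bisimilar(states, step):
    """Compute the largest bisimulation on a finite transition system by
    refinement: remove (p, q) whenever some successor of p cannot be matched
    by a successor of q staying in the relation, or vice versa."""
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            ok = (all(any((p2, q2) in R for q2 in step[q]) for p2 in step[p]) and
                  all(any((p2, q2) in R for p2 in step[p]) for q2 in step[q]))
            if not ok:
                R.discard((p, q))
                changed = True
    return R
```

Since the relation only shrinks and is finite, the loop terminates, and the result is the greatest fixed point, i.e. bisimilarity on the given system.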
4. Reversibility in Membrane Computing

The problem of finding the membranes N which evolve to a given membrane M in a single step is solved in [AC1] by defining reversibility in membrane computing. Reversibility in computing means that we have a backward evolution. In membrane systems, we start with the following question: given a membrane M, how do we find the membrane(s) N such that N evolves to M in one step? Surprisingly, a solution to this problem is given by reversing the rules of M, and finding N by applying in reverse a valid multiset of rules. When considering membrane systems with rules which only involve object rewriting (of type a → b, without messages), we find that to reverse a computation it is enough to reverse the rules (a → b becomes b → a) and find a condition equivalent to maximal parallelism. However, when rules of type a → (b, out) or a → (b, in_child) are allowed, two ways of reversing a computation appear. One is to employ a special type of rule reversal and to move the rules between membranes: for example, a → (b, out) associated to the membrane with label i in Π is replaced with b → (a, in_i) associated to the membrane with label parent(i) in Π̃. Another way of defining the reverse P system is by reversing all the rules without moving them between membranes (and thus allowing rules of the form (b, out) → a); the backwards computation is obtained by moving objects instead of moving rules. The object movement corresponds to reversing the message sending stage of the evolution of a membrane, and the maximally parallel rewriting stage is reversed. This approach allows us to reverse computations even when the P system has general communication rules, of the form u → v(w, out)(v_1, in_{i_1}) … (v_k, in_{i_k}).

A membrane system of degree m is a tuple Π = (O, µ, w_1, …, w_m, R_1, …, R_m, i_0) where:
• O is an alphabet of objects;
• µ is a membrane structure, with the membranes labelled by natural numbers 1 … m in a one-to-one manner;
• w_i are multisets over O associated with the regions 1 … m defined by µ;
• R_1, …, R_m are finite sets of rules associated with the membranes 1 … m; the rules have the form u → v, where u is a non-empty multiset of objects and v a multiset over messages of the form (a, here), (a, out), (a, in_j);
• i_0 is either a number between 1 and m specifying the output membrane of Π, or it is equal to 0, indicating that the output is the outer region.
We do not use the notion of output membrane, so in further mentions of a P system i_0 will be ignored. Since we work with two P systems at once (namely Π and Π̃), we use the notation R_1^Π, …, R_m^Π for the sets of rules R_1, …, R_m of the P system Π.
We consider a multiset w over a set S to be a function w : S → N. When writing a multiset with w(s) = 1, w(t) = 2, we use its string representation s + 2t to simplify its description. To each multiset w we associate its support, denoted by supp(w), which contains those elements of S which have a non-zero image. A multiset is called non-empty if it has non-empty support. We denote the empty multiset by 0_S. The sum of two multisets w, w′ over S is the multiset w + w′ : S → N, (w + w′)(s) = w(s) + w′(s). For two multisets w, w′ over S we say that w is contained in w′ if w(s) ≤ w′(s) for all s ∈ S; we denote this by w ≤ w′. If w ≤ w′ we can define w′ − w by (w′ − w)(s) = w′(s) − w(s). To work in a uniform manner, we consider all multisets of objects and messages to be over
Ω = O ∪ O × {out} ∪ O × {in_j | j ∈ {1, …, m}}.

Definition 1.22. The set M(Π) of membranes in a P system Π, together with the membrane structure, is inductively defined as follows:
• if i is a label and w is a multiset over O ∪ O × {out}, then ⟨i|w⟩ ∈ M(Π); ⟨i|w⟩ is called an elementary membrane, and its structure is ⟨⟩;
• if i is a label, M_1, …, M_n ∈ M(Π), n ≥ 1, have distinct labels i_1, …, i_n, each M_k has structure µ_k, and w is a multiset over O ∪ O × {out} ∪ O × {in_{i_1}, …, in_{i_n}}, then ⟨i|w; M_1, …, M_n⟩ ∈ M(Π); ⟨i|w; M_1, …, M_n⟩ is called a composite membrane, and its structure is ⟨µ_1 … µ_n⟩.

We use the notation w(M) for the multiset of membrane M. Other notations are: l(M) for the label of M, parent(i) for the label indicating the parent of the membrane labelled by i (if it exists), and children(i) for the set of labels indicating the children of the membrane labelled by i (if any).
By simple communication rules we understand that all rules inside membranes are of the form u → v, where u is a multiset of objects (supp(u) ⊆ O) and v is either a multiset of objects, or a multiset of objects with the message in_j (supp(v) ⊆ O × {in_j}), or a multiset of objects with the message out (supp(v) ⊆ O × {out}). Moreover, we suppose that the skin membrane does not have any rules involving objects with the message out. We use multisets of rules R : R_i^Π → N to describe maximally parallel application of rules. For a rule r = (u → v) we use the notations lhs(r) = u, rhs(r) = v. Similarly, for a multiset R of rules from R_i^Π, we define the following multisets over Ω:
lhs(R)(o) = Σ_{r∈R_i^Π} R(r) · lhs(r)(o)  and  rhs(R)(o) = Σ_{r∈R_i^Π} R(r) · rhs(r)(o)
for each object or message o ∈ Ω. The following definition captures the meaning of "maximally parallel application of rules":

Definition 1.23. We say that a multiset of rules R : R_i^Π → N is valid in the membrane M with label i and content w(M) if lhs(R) ≤ w(M). The
multiset R is called maximally valid in M if it is valid and there is no rule r ∈ R_i^Π such that lhs(r) ≤ w(M) − lhs(R).

4.1. P Systems with One Membrane. Suppose that the membrane system Π consists only of the skin membrane, labelled by 1. Since the membrane has no children and we have assumed it has no rules concerning out messages, all its rules are of the form u → v, with supp(u), supp(v) ⊆ O. Given the membrane M = ⟨1|w⟩ in the system Π = (O, µ, w_1, R_1^Π), we want to find all N = ⟨1|w′⟩ such that N rewrites to M in a single maximally parallel rewriting step. To do this we define the reverse membrane system Π̃ = (O, µ, w_1, R_1^{Π̃}), with evolution rules given by:

(u → v) ∈ R_1^{Π̃} if and only if (v → u) ∈ R_1^Π.

For each M ∈ M(Π), M = ⟨1|w⟩, we consider the reverse membrane M̃ = ⟨1|w⟩ ∈ M(Π̃), which has the same elements and label as M. The notation M̃ is used to emphasize that it is a membrane of the system Π̃.
Remark 1.24. Note that using the term reverse for Π̃ is appropriate because the reverse of Π̃ is Π itself.
When we reverse the rules of a membrane system, reversibility of the maximally parallel application of rules requires a different concept than the maximal validity of a multiset of rules.
Definition 1.25. The multiset R : R_i^Π → N is called reversely valid in the membrane M ∈ M(Π) with label i and content w(M) if it is valid in M and there is no rule r ∈ R_i^Π such that rhs(r) ≤ w(M) − lhs(R).

Note that the difference from maximal validity is that here we use the right-hand side of a rule r in rhs(r) ≤ w(M) − lhs(R), instead of the left-hand side.

Example 1.26. Let M = ⟨1|b + c⟩ with evolution rules R_1^Π = {r_1, r_2}, where r_1 = a → b, r_2 = b → c. Then M̃ = ⟨1|b + c⟩ with evolution rules R_1^{Π̃} = {r̃_1, r̃_2}, where r̃_1 = b → a, r̃_2 = c → b. The valid multisets of rules in M̃ are 0_{R_1^{Π̃}}, r̃_1, r̃_2 and r̃_1 + r̃_2. The reversely valid multiset of rules R̃ in M̃ can be either r̃_1 or r̃_1 + r̃_2. If R̃ = r̃_1 then M̃ rewrites to ⟨1|a + c⟩; if R̃ = r̃_1 + r̃_2 then M̃ rewrites to ⟨1|a + b⟩. These are the only two membranes from which we can obtain M in one maximally parallel rewriting step (in Π).

This example clarifies why we have to apply multisets of rules which are reversely valid: validity ensures that some objects are consumed by rules r̃ (reversely, they were produced by some rules r), and reverse validity ensures that objects like b (appearing in both the left- and right-hand sides of rules) are always consumed by rules r̃ (reversely, they were surely produced by some rules r, otherwise maximal parallelism for the multiset R would be contradicted).
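Definition 1.25 and Example 1.26 can be checked mechanically. In this sketch (representation and names are ours), multisets are `Counter`s, rules are dicts indexed by position, and a multiset of rules is a `Counter` over rule indices:

```python
from collections import Counter

def reverse_rules(rules):
    """Reverse each rule u -> v into v -> u (one-membrane case)."""
    return [{"lhs": Counter(r["rhs"]), "rhs": Counter(r["lhs"])} for r in rules]

def is_valid(R, rules, w):
    """Validity: lhs(R) <= w; also return the leftover w - lhs(R)."""
    lhs = sum((Counter({a: R[i] * n for a, n in rules[i]["lhs"].items()})
               for i in R), Counter())
    return all(w[a] >= n for a, n in lhs.items()), w - lhs

def reversely_valid(R, rules, w):
    """Definition 1.25: R valid, and no rule's *right-hand side* fits in
    the leftover w - lhs(R)."""
    ok, rest = is_valid(R, rules, w)
    return ok and not any(all(rest[a] >= n for a, n in r["rhs"].items())
                          for r in rules)
```

Running this on the reversed rules of Example 1.26 (r̃_1 = b → a, r̃_2 = c → b, content b + c) confirms that exactly r̃_1 and r̃_1 + r̃_2 are reversely valid.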
The operational semantics for both maximally parallel application of rules and inverse maximally parallel application of rules in a P system with one membrane is presented in [AC1]. For a multiset R of rules over R_1^Π we denote by R̃ the multiset of rules over R_1^{Π̃} for which R̃(u → v) = R(v → u). Then lhs(R̃) = rhs(R) and rhs(R̃) = lhs(R).

Proposition 1.27. N →_mpr^R M if and only if M̃ →_{m̃pr}^{R̃} Ñ.
4.2. P Systems Without Communication Rules. If the P system has more than one membrane but no communication rules (i.e. no rules of the form u → v with supp(v) ⊆ O × {out} or supp(v) ⊆ O × {in_j}), the method of reversing the computation is similar to that previously described. We describe it again, but in a different way, since here we introduce the notion of a (valid) system of multisets of rules for a P system Π. This notion is useful for P systems without communication rules, and is fundamental in reversing the computation of a P system with communication rules.

Definition 1.28. A system of multisets of rules for a P system Π of degree m is a tuple R = (R_1, R_2, …, R_m), where each R_i is a multiset over R_i^Π, i ∈ {1, …, m}.

We call descendant of a membrane M each membrane contained in it, i.e. M itself, the children of M, their children, and so on. A system of multisets of rules R is called valid, maximally valid or reversely valid in the skin membrane M if each R_i is valid, maximally valid or reversely valid in the membrane M_i which is the descendant of M with label i, i ∈ {1, …, m}.

The P system Π̃ reverse to the P system Π is defined analogously to the one in subsection 4.1: Π̃ = (O, µ, w_1, …, w_m, R_1^{Π̃}, …, R_m^{Π̃}), where (u → v) ∈ R_i^{Π̃} if and only if (v → u) ∈ R_i^Π. Note that the reverse of Π̃ is Π.

If R = (R_1, …, R_m) is a system of multisets of rules for a P system Π, we denote by R̃ the system of multisets of rules for the reverse P system Π̃ given by R̃ = (R̃_1, …, R̃_m).
Example 1.29. Let M = ⟨1|b + c; N⟩, N = ⟨2|2a⟩ with evolution rules R_1^Π = {r_1, r_2}, R_2^Π = {r_3, r_4}, where r_1 = a → c, r_2 = d → c, r_3 = a + b → a, r_4 = a → d. Then M̃ = ⟨1|b + c; ⟨2|2a⟩⟩, with evolution rules R_1^{Π̃} = {r̃_1, r̃_2}, R_2^{Π̃} = {r̃_3, r̃_4}, where r̃_1 = c → a, r̃_2 = c → d, r̃_3 = a → a + b, r̃_4 = d → a. In order to find all membranes which evolve to M in one step, we look for a system R̃ = (R̃_1, R̃_2) of multisets of rules which is reversely valid in the skin membrane M̃. Then R̃_1 can be either 0_{R_1^{Π̃}}, r̃_1 or r̃_2, and the only possibility for R̃_2 is 2r̃_3. We apply R̃ to the skin membrane M̃ and we obtain three possible membranes P such that P ⇒ M; namely, P can be either ⟨1|b + c; ⟨2|2a + 2b⟩⟩ or ⟨1|b + a; ⟨2|2a + 2b⟩⟩ or ⟨1|b + d; ⟨2|2a + 2b⟩⟩.
Pictorially: in Π, membrane 1 contains b + c with the rules r_1 : a → c, r_2 : d → c, while its child membrane 2 contains 2a with the rules r_3 : a + b → a, r_4 : a → d; in the reverse system Π̃, membrane 1 contains b + c with r̃_1 : c → a, r̃_2 : c → d, and membrane 2 contains 2a with r̃_3 : a → a + b, r̃_4 : d → a.
An inductive definition of the operational semantics for both maximally parallel application of rules and inverse maximally parallel application of rules in a P system without communication rules is presented in [AC1]. We use R as a label to suggest that rule application is done simultaneously in all membranes, and thus to prepare the way toward the general case of P systems with communication rules.

Proposition 1.30. If N is the skin membrane for a P system Π, then N →_mpr^R M if and only if M̃ →_{m̃pr}^{R̃} Ñ.

Proof. If N →_mpr^R M then R is maximally valid in the skin membrane N, which means that R_i is maximally valid in N_i, the descendant of N with label i, and w(M_i) = w(N_i) − lhs(R_i) + rhs(R_i), where M_i is the descendant of M with label i. By using the same reasoning as in the proof of Proposition 1.27, it follows that R̃_i is reversely valid in M_i, for all i ∈ {1, …, m}. Therefore R̃ is reversely valid in the skin membrane M̃ of the reverse P system Π̃. Moreover, we have w(Ñ_i) = w(M̃_i) − lhs(R̃_i) + rhs(R̃_i), so M̃ →_{m̃pr}^{R̃} Ñ. If M̃ →_{m̃pr}^{R̃} Ñ, the proof follows as for the implication just proved.
4.3. P Systems with Communication Rules. When the P system has communication rules, we can no longer simply reverse the rules to obtain a reverse computation; we also have to move the rules between membranes. When saying that we move the rules, we understand that the reverse system can have a rule r̃ associated to a membrane with label i while r is associated to a membrane with label j (j is either the parent or the child of i, depending on the form of r). If u is a multiset of objects (supp(u) ⊆ O), we denote by (u, out) the multiset with supp(u, out) ⊆ O × {out} given by (u, out)(a, out) = u(a), for all a ∈ O. More explicitly, (u, out) has only messages of the form (a, out), and their number is that of the objects a in u. Given a label j, we define (u, in_j) similarly: supp(u, in_j) ⊆ O × {in_j} and (u, in_j)(a, in_j) = u(a), for all a ∈ O.

The P system Π̃ reverse to the P system Π is defined differently from the case of P systems without communication rules: Π̃ = (O, µ, w_1, …, w_m, R_1^{Π̃}, …, R_m^{Π̃}) such that:
(1) r̃ = u → v ∈ R_i^{Π̃} if and only if r = v → u ∈ R_i^Π;
(2) r̃ = u → (v, out) ∈ R_i^{Π̃} if and only if r = v → (u, in_i) ∈ R_{parent(i)}^Π;
(3) r̃ = u → (v, in_j) ∈ R_i^{Π̃} if and only if r = v → (u, out) ∈ R_j^Π, with i = parent(j);
where u, v are multisets of objects. Note the difference between rule reversal when there are no communication rules and for the current class of P systems with communication rules.

Proposition 1.31. The reverse of the reverse of a P system is the initial P system.

If R = (R_1, …, R_m) is a system of multisets of rules for a P system Π, we also need a different reversal for it. Namely, we denote by R̃ the system of multisets of rules for the reverse P system Π̃ given by R̃ = (R̃_1, …, R̃_m), such that:
• if r̃ = u → v ∈ R_i^{Π̃} then R̃_i(r̃) = R_i(r);
• if r̃ = u → (v, out) ∈ R_i^{Π̃} then R̃_i(r̃) = R_{parent(i)}(r);
• if r̃ = u → (v, in_j) ∈ R_i^{Π̃} then R̃_i(r̃) = R_j(r).

Example 1.32. Consider M = ⟨1|d; N⟩, N = ⟨2|c + e; P⟩, P = ⟨3|c⟩ in the P system Π with R_1^Π = {r_1, r_2}, R_2^Π = {r_3, r_4} and R_3^Π = {r_5}, where r_1 = a → (c, in_2), r_2 = a → c, r_3 = e → (c, in_3), r_4 = a → (d, out) and r_5 = b → (e, out). Then M̃ = ⟨1|d; ⟨2|c + e; ⟨3|c⟩⟩⟩ in the reverse P system Π̃, with R_1^{Π̃} = {r̃_2, r̃_4}, R_2^{Π̃} = {r̃_1, r̃_5}, R_3^{Π̃} = {r̃_3}, where r̃_1 = c → (a, out), r̃_2 = c → a, r̃_3 = c → (e, out), r̃_4 = d → (a, in_2) and r̃_5 = e → (b, in_3). For a system of multisets of rules R = (r_1 + r_2, 2r_4, 3r_5) in Π, the reverse is R̃ = (2r̃_4 + r̃_2, r̃_1 + 3r̃_5, 0_{R_3^{Π̃}}).
Pictorially: in Π, membrane 1 contains d with r_1 : a → (c, in_2) and r_2 : a → c; its child membrane 2 contains c + e with r_3 : e → (c, in_3) and r_4 : a → (d, out); membrane 3, child of 2, contains c with r_5 : b → (e, out). In the reverse system Π̃, membrane 1 contains d with r̃_2 : c → a and r̃_4 : d → (a, in_2); membrane 2 contains c + e with r̃_1 : c → (a, out) and r̃_5 : e → (b, in_3); membrane 3 contains c with r̃_3 : c → (e, out).
The definitions of validity and maximal validity of a system of multisets of rules are the same as before. However, we need to extend the definition of reverse validity to describe the situations arising from a rule being moved.

Definition 1.33. A system of multisets of rules R = (R_1, …, R_m) for a P system Π is called reversely valid in the skin membrane M (with descendants M_i, for each label i) if:
• R is valid in the skin membrane M (i.e. lhs(R_i) ≤ w(M_i));
• for all i ∈ {1, …, m}, there is no rule r = u → v ∈ R_i^Π such that rhs(r) = v ≤ w(M_i) − lhs(R_i);
• for all i ∈ {1, …, m} such that parent(i) exists, there is no rule r = u → (v, in_i) ∈ R_{parent(i)}^Π such that v ≤ w(M_i) − lhs(R_i);
• for all i, j ∈ {1, …, m} such that parent(j) = i, there is no rule r = u → (v, out) ∈ R_j^Π such that v ≤ w(M_i) − lhs(R_i).
While this definition is more complicated than the one presented in subsection 4.2, it can be seen in the proof of Proposition 1.34 that it is exactly what is required to reverse a computation in which a maximally parallel rewriting takes place.

Example 1.32 continued. We look for R̃ reversely valid in M̃. Since R̃ must be valid, we can have R̃_1 equal to 0_{R_1^{Π̃}} or r̃_4; R̃_2 equal to 0_{R_2^{Π̃}}, r̃_1, r̃_5 or r̃_1 + r̃_5; and R̃_3 equal to 0_{R_3^{Π̃}} or r̃_3. According to Definition 1.33, we can look at any possibility for R̃_i to see if it can be a component of a reversely valid system R̃. In this example, the only problem (with respect to reverse validity) appears when R̃_2 = 0_{R_2^{Π̃}} or when R̃_2 = r̃_1, since in both cases we have e ≤ w(Ñ) − lhs(R̃_2) and rule c → (e, out) ∈ R_3^{Π̃}. Let us see why we exclude exactly these two cases. Suppose R̃_2 = r̃_1 and, for example, R̃_1 = r̃_4, R̃_3 = r̃_3. If we apply R̃, then M̃ rewrites to ⟨1|(a, in_2); ⟨2|(a, out) + e; ⟨3|(e, out)⟩⟩⟩; after message sending, we obtain ⟨1|a; ⟨2|a + 2e; ⟨3|0_O⟩⟩⟩, which cannot rewrite to M while respecting maximal parallelism (otherwise there would appear two c's in the membrane P with label 3). The same thing would happen when R̃_2 = 0_{R_2^{Π̃}}.

In P systems with communication rules we work with both rewriting and message sending. We have presented two semantics for rewriting: mpr (maximally parallel rewriting) and m̃pr (inverse maximally parallel rewriting). They are also used here, with the remark that the notion of reversely valid system has been extended. The message sending stage consists of erasing the messages from the multiset in each M_i, adding to each such multiset the objects a corresponding to messages (a, in_i) in the parent membrane M_{parent(i)}, and, furthermore, adding the objects a corresponding to messages (a, out) in the children membranes M_j, for all j ∈ children(i). We call a membrane M message-free if for any descendant N, w(N) is a multiset of objects.
Proposition 1.34. If the skin membrane M is message-free, then
M →_mpr^R →_msg N implies Ñ →_{m̃pr}^{R̃} →_msg M̃.
If the skin membrane Ñ is message-free, then
Ñ →_{m̃pr}^{R̃} →_msg M̃ implies M →_mpr^R →_msg N.
4.4. Moving Objects Instead of Moving Rules. Another way to reverse a computation N →_mpr^R →_msg M is to move objects instead of moving rules. We start by reversing all rules of the P system Π; since these rules can be communication rules, what is obtained by their reversal is not, technically, a P system. For example, a rule r = a → (b, out) yields r̃ = (b, out) → a, whose left-hand side contains the message out and which therefore is not a rule. However, we can consider that in the reverse P system rules are allowed
to also have messages in their left-hand sides. This approach allows us to reverse computations even when the P system has general communication rules, of the form u → v(w, out)(v_1, in_{i_1}) … (v_k, in_{i_k}).

The reverse Π̃ of the P system Π is now defined differently: Π̃ = (O, µ, w_1, …, w_m, R_1^{Π̃}, …, R_m^{Π̃}) such that

r̃ = u → v ∈ R_i^{Π̃} if and only if r = v → u ∈ R_i^Π,

where v is a multiset of objects with messages and u a multiset of objects. Given a system of multisets of rules R = (R_1, …, R_m) for the P system Π, we define the system of multisets of rules R̃ = (R̃_1, …, R̃_m) for the reverse Π̃ by R̃_i(r̃) = R_i(r). The notions of valid, maximally valid and reversely valid systems, and the definitions of →_mpr^R, →_{m̃pr}^{R̃} and →_msg, are those from subsection 4.3.

We move the objects present in the membranes contained by the skin and transform them from objects into messages according to the rules of the membrane system. The aim is to achieve a result of the form

M →_mpr^R N →_msg P if and only if P̃ →_{m̃sg}^{R̃} Ñ →_{m̃pr} M̃.
However, to reverse the message sending part of the evolution of the P system we need to have a system of multisets of rules R̃ which "calls" objects, and moreover that system has to be exactly the R̃ which is applied in Ñ →_{m̃pr}^{R̃} M̃. By "call" we understand that some objects are moved from one membrane to another, where they are transformed into messages, under the influence of some rules.

Definition 1.35. For a system of multisets of rules R̃ in the reverse Π̃ we define the multisets of objects lhs_out(R̃_i) and lhs_{in_j}(R̃_i) by:

lhs_out(R̃_i)(a) = Σ_{r̃∈R_i^{Π̃}} R̃_i(r̃) · w(a)  and  lhs_{in_j}(R̃_i)(a) = Σ_{r̃∈R_i^{Π̃}} R̃_i(r̃) · v′(a),

where r̃ = v(w, out)(v_1, in_{i_1}) … (v_k, in_{i_k}) → u, v′ = v_l if i_l = j, and v′ is the empty multiset over Ω if j ∉ {i_1, …, i_k}.
We define the transition relation →_{m̃sg}^{R̃} on skin membranes in the reverse Π̃ by M̃ →_{m̃sg}^{R̃} Ñ if and only if, for every label i,

w(M̃_i) ≥ lhs_{in_i}(R̃_{parent(i)}) + Σ_{j∈children(i)} lhs_out(R̃_j)

and

w(Ñ_i) = w(M̃_i) − lhs_{in_i}(R̃_{parent(i)}) − Σ_{j∈children(i)} lhs_out(R̃_j) + (lhs_out(R̃_i), out) + Σ_{j∈children(i)} (lhs_{in_j}(R̃_i), in_j).

An example illustrating the movement of the objects is the following:
In the P system Π, the skin membrane 1 contains a with rules r_1 : a → (c, in_2) and r_2 : a → c; its child membrane 2 contains a + e with rules r_3 : e → (e, out)(c, in_3) and r_4 : a → (d, out); membrane 3, child of 2, contains b with rule r_5 : b → (e, out). A →_mpr step applying (r_1, r_3 + r_4, r_5) produces (c, in_2) in membrane 1, (d, out) + (c, in_3) + (e, out) in membrane 2, and (e, out) in membrane 3; the subsequent →_msg step yields d + e in membrane 1, c + e in membrane 2, and c in membrane 3. Reversing all the rules gives r̃_1 : (c, in_2) → a, r̃_2 : c → a, r̃_3 : (e, out)(c, in_3) → e, r̃_4 : (d, out) → a and r̃_5 : (e, out) → b; starting from the final configuration, the step →_{m̃sg} with R̃ = (r̃_1, r̃_3 + r̃_4, r̃_5) recreates the messages, and →_{m̃pr} then recovers the initial configuration,
where the “reverse” movement →msg g of objects between membranes is: called by rule r˜4
• d in membrane 1 −−−−−−−−−−→ (d, out) in membrane 2; called by rule r˜1
• c in membrane 2 −−−−−−−−−−→ (c, in2 ) in membrane 1; called by rule r˜5
• e in membrane 2 −−−−−−−−−−→ (e, out) in membrane 3; called by rule r˜3
• c in membrane 3, e in membrane 1 −−−−−−−−−−→ (c, in3 ) + (e, out) in membrane 2. By applying the reverse rules, messages are consumed and turned into objects, thus performing a reversed computation to the initial membrane. Proposition 1.36. If M is a message-free skin membrane in the P system Π then e R R e Re g M f M →mpr N →msg P implies Pe →msg g N →mpr
e then If Pe is a message-free skin membrane in the reverse Π
e R R e Re g M f implies M → Pe →msg mpr N →msg P g N →mpr
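The round trip above can be checked mechanically with ordinary multisets. The following Python sketch (an illustration with hard-coded rules, not an implementation of the formal definitions) starts from the final configuration d + e, c + e, c, performs the reverse message step exactly as in the bullet list above, and then applies the reverse rules until no message is left; the initial configuration a, a + e, b is recovered:

```python
from collections import Counter

# Final configuration: membrane label -> multiset; messages are tuples
# such as ('c', 'in2') or ('d', 'out'), plain objects are strings.
final = {1: Counter({'d': 1, 'e': 1}),
         2: Counter({'c': 1, 'e': 1}),
         3: Counter({'c': 1})}

# Reverse message step: each object returns, as a message, to the membrane
# holding the reverse rule that "calls" it (see the bullet list above).
cfg = {1: Counter(), 2: Counter(), 3: Counter()}
cfg[2][('d', 'out')] += final[1].pop('d')   # called by reverse rule r4
cfg[1][('c', 'in2')] += final[2].pop('c')   # called by reverse rule r1
cfg[3][('e', 'out')] += final[2].pop('e')   # called by reverse rule r5
cfg[2][('c', 'in3')] += final[3].pop('c')   # called by reverse rule r3
cfg[2][('e', 'out')] += final[1].pop('e')   # called by reverse rule r3

# Reverse maximally parallel step: messages are consumed by reverse rules.
reverse_rules = {
    1: [({('c', 'in2'): 1}, 'a')],                   # r1~: (c,in2) -> a
    2: [({('e', 'out'): 1, ('c', 'in3'): 1}, 'e'),   # r3~: (e,out)(c,in3) -> e
        ({('d', 'out'): 1}, 'a')],                   # r4~: (d,out) -> a
    3: [({('e', 'out'): 1}, 'b')],                   # r5~: (e,out) -> b
}
for m, rules in reverse_rules.items():
    for lhs, rhs in rules:
        while all(cfg[m][msg] >= k for msg, k in lhs.items()):
            for msg, k in lhs.items():
                cfg[m][msg] -= k
            cfg[m][rhs] += 1

initial = {1: Counter({'a': 1}), 2: Counter({'a': 1, 'e': 1}),
           3: Counter({'b': 1})}
assert {m: +c for m, c in cfg.items()} == initial
```

The `while` loop plays the role of maximal parallelism here: every reverse rule is applied as many times as its left-hand-side messages allow.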
5. Minimal Parallelism
A debated topic in membrane computing is how “realistic” various classes of P systems, hence various ingredients used in them, are from a biological point of view. Starting from the observation that there is an obvious parallelism in the cell biochemistry, and relying on the assumption that “if we wait enough, then all reactions which may take place will take place”, a basic feature of the P systems as introduced in [89] is the maximally parallel way of using the rules (in each step, in each region of a system, we have to use a maximal multiset of rules). This condition provides a useful tool
in proving various properties, because it decreases the nondeterminism of the system's evolution, and it supports techniques for simulating the powerful features of appearance checking and zero testing, standard methods used to obtain the computational completeness/universality of regulated rewriting in formal language theory and of register machines, respectively.

In [CP1] we propose a rather natural condition: from each set of rules from which at least one rule can be used, at least one rule is actually used (maybe more, without any restriction). We say that this is a minimal parallelism, because this mode of using the rules ensures that all compartments of the system evolve in parallel, by using at least one rule whenever such a rule is applicable. We have two main cases, given by the fact that the rules can be associated with a region of a P system (as in P systems with symbol-objects processed by rewriting-like rules), or they can be associated with membranes (as in symport/antiport P systems and in P systems with active membranes). The minimal parallelism refers to the fact that each such set is considered separately, and, if at least one rule of a set can be applied, then at least one rule must be applied. For systems with only one membrane the minimal parallelism is nothing else than non-synchronization, hence the non-trivial case is that of multi-membrane systems.

We consider only two cases, that of P systems with symport/antiport rules, and that of P systems with active membranes. Somewhat surprisingly, the universality is obtained again, in both cases. For instance, for symport/antiport systems, we again need a small number of membranes (three in the generative case, and two in the accepting case), while the symport and antiport rules are rather simple (of weight two).
The universality is obtained both for systems working in the generative mode, in which one collects the results of all halting computations, defined as the multiplicity of objects from an output membrane in the halting configuration, and for systems working in the accepting mode, where a number is introduced in the system, in the form of the multiplicity of a specified object in a specified membrane, and the number is accepted if the computation halts. Moreover, the system can be deterministic in the accepting case.

Similarly, P systems with active membranes are universal even when using rules of only the types (a), (b), (c) (local evolution of objects, send-in, and send-out rules, respectively). These systems can also solve NP-complete problems in polynomial time – we exemplify this possibility by using the Boolean satisfiability problem (SAT). The results are as usual in the case of maximal parallelism, with the mention that we pay a price for them here: we use three polarizations in the universality proof, and division of non-elementary membranes in the efficiency proof; the construction for solving SAT is semi-uniform, and it works in a confluent way. The minimal parallelism for other classes of P systems, especially for catalytic systems, remains to be investigated.

We introduce the minimal parallelism for symport/antiport P systems, also giving their universality, both in the generative and the accepting cases.
Then we consider the minimal parallelism for P systems with active membranes, which are proved to be both universal and computationally efficient. More details are in [CP1].

A register machine is a construct M = (n, B, l0, lh, R), where n is the number of registers, B is the set of instruction labels, l0 is the start label, lh is the halt label (assigned to HALT only), and R is the set of instructions. Each label of B labels only one instruction from R, thus precisely identifying it.

A register machine M generates a set N(M) of numbers in the following way: having initially all registers empty (hence storing the number zero), start with the instruction labelled by l0, and proceed to apply instructions as indicated by the labels and by the contents of the registers. If we reach the halt instruction, then the number stored at that time in register 1 is said to be computed by M, and hence it is introduced in N(M). Since we have a non-deterministic choice in the continuation of the computation in the case of ADD instructions, N(M) can be an infinite set. It is known (see [77]) that in this way we can compute all the sets of numbers which are Turing computable, even using register machines with only three registers. Without loss of generality, we may assume that lj ≠ li and lk ≠ li whenever we use an instruction li : (ADD(r), lj, lk), and that the unique instruction labelled by l0 is an ADD instruction. Moreover, we may assume that the registers of the machine, except register 1, are empty in the halting configuration.

A register machine can also work in an accepting mode. The number to be accepted is introduced in register 1, with all other registers empty. We start computing with the instruction labelled by l0; if the computation halts, then the number is accepted (the contents of the registers in the halting configuration do not matter). We still denote by N(M) the set of numbers accepted by a register machine M.
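The register machine model just described is small enough to sketch directly; the following Python fragment (an illustrative encoding — the instruction tuples and the doubling program are our own, not from the text) runs a machine in the generative style, with the non-deterministic ADD branching resolved by a seeded random choice:

```python
import random

# Instructions: ('ADD', r, lj, lk) increments register r and jumps
# non-deterministically to lj or lk; ('SUB', r, lj, lk) decrements register r
# and jumps to lj if it was non-empty, otherwise jumps to lk; 'HALT' stops.
def run(program, l0, registers, rng=random.Random(0)):
    label = l0
    while program[label] != 'HALT':
        op, r, lj, lk = program[label]
        if op == 'ADD':
            registers[r] += 1
            label = rng.choice((lj, lk))
        else:  # 'SUB'
            if registers[r] > 0:
                registers[r] -= 1
                label = lj
            else:
                label = lk
    return registers

# Hypothetical example program: compute 2*m into register 1 from m in register 2.
prog = {
    'l0': ('SUB', 2, 'l1', 'lh'),
    'l1': ('ADD', 1, 'l2', 'l2'),   # deterministic ADD: lj = lk
    'l2': ('ADD', 1, 'l0', 'l0'),
    'lh': 'HALT',
}
print(run(prog, 'l0', {1: 0, 2: 3})[1])   # -> 6
```

In the accepting mode one would instead load the input m into register 1 and report acceptance exactly when the loop reaches lh.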
In the accepting case, we can also consider deterministic register machines, where all the instructions li : (ADD(r), lj, lk) have lj = lk. It is known that deterministic accepting register machines characterize NRE.

5.1. Minimal Parallelism for Symport/Antiport P Systems. A P system with symport/antiport rules (of degree n ≥ 1) is a construct of the form

    Π = (O, µ, w1, . . . , wn, E, R1, . . . , Rn, io), where

(1) O is the alphabet of objects,
(2) µ is the membrane structure (of degree n ≥ 1, with the membranes labelled in a one-to-one manner with 1, 2, . . . , n),
(3) w1, . . . , wn are strings over O representing the multisets of objects present in the n compartments of µ in the initial configuration of the system,
(4) E ⊆ O is the set of objects supposed to appear in the environment in arbitrarily many copies,
(5) R1, . . . , Rn are the (finite) sets of rules associated with the n membranes of µ,
(6) io ∈ {1, . . . , n} is the label of a membrane of µ which indicates the output region of the system.

The rules of any set Ri can be of two types: symport rules, of the forms (x, in), (x, out), and antiport rules, of the form (u, out; v, in), where x, u, v are strings over O. The length of x, respectively the maximum of the lengths of u and v, is called the weight of the corresponding (symport or antiport) rule.

In what follows, the rules are applied in a non-deterministic minimally parallel manner: in each step, from each set Ri (1 ≤ i ≤ n) we use at least one rule (without specifying how many), provided that this is possible for a chosen selection of rules. The rules to be used, as well as the objects to which they are applied, are non-deterministically chosen. More specifically, we assign non-deterministically objects to rules, starting with one rule from each set Ri. If we cannot enable any rule for a certain set Ri, then this set remains idle. After having one rule enabled in each set Ri for which this is possible, if we still have objects which can evolve, then we evolve them by any number of rules (possibly none).

We emphasize an important aspect: the competition for objects. It is possible that rules from two sets Ri, Rj associated with adjacent membranes i, j (that is, membranes which have access to a common region, either horizontally or vertically in the membrane structure) use the same objects, so that only rules from one of these sets can be used. Such conflicts are resolved in the following way: if possible, objects are assigned non-deterministically to a rule from one set, say Ri; then, if possible, other objects are assigned non-deterministically to a rule from Rj, thus fulfilling the condition of minimal parallelism. After that, further rules from Ri or Rj can be used for the remaining objects, non-deterministically assigning objects to rules.
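This conflict resolution can be checked exhaustively on a small instance. In the sketch below (illustrative; it anticipates the example discussed next) region k holds the objects a, a, b, with Ri = {(b, in)} for membrane i and Rj = {(a, in), (b, in)} for membrane j; an assignment is kept exactly when every set that used no rule has no rule applicable to the leftover objects:

```python
from itertools import product

# b_target: where b is sent ('i', 'j', or None = b stays in region k);
# num_a: how many copies of a are moved to membrane j by (a,in) in Rj.
correct = []
for b_target, num_a in product(('i', 'j', None), (0, 1, 2)):
    used_i = b_target == 'i'                 # Ri used its rule (b,in)?
    used_j = b_target == 'j' or num_a > 0    # Rj used (b,in) or (a,in)?
    leftover_a = 2 - num_a
    leftover_b = 0 if b_target else 1
    ok = True
    if not used_i and leftover_b > 0:        # (b,in) in Ri still applicable
        ok = False
    if not used_j and (leftover_a > 0 or leftover_b > 0):
        ok = False                           # some rule of Rj still applicable
    if ok:
        correct.append((b_target, num_a))
print(len(correct), correct)
```

The enumeration finds exactly five correct assignments: b to membrane i with one or two copies of a moved to j, or b to membrane j with zero, one, or two copies of a moved to j.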
If no rule from the other set (Rj, respectively Ri) can be used after assigning objects to a rule from Ri or Rj, then this is a correct choice of rules; it depends on the first assignment of objects to rules whether rules from each set can be made applicable. An example can illuminate this aspect: let us consider the configuration [k [i ]i aab [j ]j ]k, with Ri = {(b, in)} and Rj = {(a, in), (b, in)}. The objects aab are available to rules from both sets. All the following five assignments are correct: b goes to membrane i by means of the rule (b, in) ∈ Ri, and one or two copies of a are moved to membrane j by means of the rule (a, in) ∈ Rj; or b goes to membrane j by means of the rule (b, in) ∈ Rj, no rule from Ri can then be used, and zero, one, or two copies of a are also moved to membrane j by means of the rule (a, in) ∈ Rj. Moving only b to membrane i, without using the rule (a, in) ∈ Rj at least once, is not correct.

As usual in membrane computing, we consider successful only the computations which halt, and with a halting computation we associate a result given by the number of objects present in the halting configuration in region io
(note that this region is not necessarily an elementary one). External output, or squeezing the result through a terminal set of objects, are also possible, but we do not consider these cases here.

The set of numbers generated by a system Π in the way described above is denoted by Ngen(Π). The family of all sets Ngen(Π) generated by systems with at most n ≥ 1 membranes, using symport rules of weight at most p ≥ 0 and antiport rules of weight at most q ≥ 0, is denoted by N^gen_min OPn(symp, antiq), with the subscript "min" indicating the "minimal parallelism" used in computations.

We can also use a P system as above in the accepting mode: for a specified object a, we add a^m to the multiset of region io, and then we start working; if the computation stops, then the number m is accepted. We denote by Nacc(Π) the set of numbers accepted by a system Π, and by N^acc_min OPn(symp, antiq) the family of all sets of numbers Nacc(Π) accepted by systems with at most n membranes, using symport rules of weight at most p, and antiport rules of weight at most q. In the accepting case we can consider deterministic systems, where at most one continuation is possible in each step; in our context, this means that zero or exactly one rule can be used from each set Ri, 1 ≤ i ≤ n. We add the letter D in front of N^acc_min OPn(symp, antiq) to denote the corresponding families, DN^acc_min OPn(symp, antiq).

Our expectation in introducing the minimal parallelism was to obtain a non-universal class of P systems, possibly with decidable properties. This turns out not to be true: the minimal parallelism still contains the possibility to enforce the checking for zero from SUB instructions of register machines (this is the same as appearance checking from regulated rewriting), both for symport/antiport systems and for systems with active membranes.

Theorem 1.37. N^gen_min OPn(symp, antiq) = NRE for all n ≥ 3, p ≥ 2, q ≥ 2.
Proof. It is sufficient to prove the inclusion NRE ⊆ N^gen_min OP3(sym2, anti2); the inclusions N^gen_min OPn(symp, antiq) ⊆ N^gen_min OPn′(symp′, antiq′) ⊆ NRE, for all 1 ≤ n ≤ n′, 0 ≤ p ≤ p′, and 0 ≤ q ≤ q′, are obvious.

Let us consider a register machine M = (3, B, l0, lh, R); the fact that three registers suffice is not relevant in this proof. We consider an object g(l) for each label l ∈ B, and denote by g(B) the set of these objects. We define B′ = {l′ | l ∈ B}, B′′ = {l′′ | l ∈ B}, and B′′′ = {l′′′ | l ∈ B}, where l′, l′′, and l′′′ are new symbols associated with l ∈ B. For an arbitrary set of objects Q, we denote by w(Q) a string which contains each object from Q exactly once; thus w(Q) represents the multiset which consists of exactly one occurrence of each object of Q. In general, for any alphabet Q we use the derived sets Q′ = {b′ | b ∈ Q}, Q′′ = {b′′ | b ∈ Q}, and Q′′′ = {b′′′ | b ∈ Q}, where b′, b′′, b′′′ are new symbols associated with b ∈ Q. We construct a P system (of degree 3)
Π = (O, µ, w1 , w2 , w3 , E, R1 , R2 , R3 , 3),
with
    O = {l, l′, l′′, l′′′, g(l) | l ∈ B} ∪ {ar | 1 ≤ r ≤ 3} ∪ {c, d, t},
    µ = [1 [2 [3 ]3 ]2 ]1,
    w1 = w(B − {l0}) w(B′) w(B′′) ct,
    w2 = w(g(B)),
    w3 = λ,
    E = {ar | 1 ≤ r ≤ 3} ∪ B′′′ ∪ g(B) ∪ {c, d, l0},
and with the following sets of rules, presented in groups associated with various tasks during the computation, also indicating the steps when they are used:

(1) Starting the computation:

    R1: (c, out; car, in), 1 ≤ r ≤ 3, and (c, out; dl0, in)
    R2: (dl0, in)
    R3: –
As long as c is present in region 1, we can bring copies of objects ar (1 ≤ r ≤ 3) into the system by using rule (c, out; car, in) ∈ R1. The number of copies of ar in region 2 represents the number stored in register r of the register machine. At any moment we can use a rule (c, out; dl0, in), which brings the initial label of M inside (note that l0 does not appear in w1), together with the additional object d. From then on, no further object ar can be brought into the system. In the next step, the objects dl0 can enter region 2 by means of rule (dl0, in) ∈ R2, and this initiates the simulation of a computation in our register machine M.

(2) We also have the following additional rules, used in interaction with rules from the other groups, including the previous one:

    R1: (l0, out)
    R2: (g(l), out; dt, in), l ∈ B
    R3: (t, out), (t, in)

The role of these rules is to prevent "wrong" steps in Π. For instance, as soon as the instruction with label l0 is simulated (see details below), label l0 should be sent out of the system in order to avoid another computation in M, and this is done by means of rule (l0, out) ∈ R1. This rule should be used as soon as l0 is available in region 1, and no other rule from R1 may be applied – we will
see immediately that this is ensured by the rules simulating the instructions of M. However, l0 should not be eliminated prematurely, i.e., before simulating the instruction with label l0. In such a case, a rule (g(l), out; dt, in) for some l ∈ B should be used once and only once, because we have only one available copy of d. In this way, the object t enters membrane 2, and the computation can never stop because of the rules (t, out), (t, in) ∈ R3. These trap-rules are also used in order to control the correct simulation of the instructions of M.

(3) Simulating an instruction li : (ADD(r), lj, lk):

    Step 1:  R1: –;  R2: (li, out; lp ar, in), p = j, k, or (li, out; t, in);  R3: –

When li is in region 2, the number of copies of ar is incremented by bringing ar from the skin region, together with either of the labels lj and lk. Since we have provided in w1 one copy of each label different from l0, labels lj and lk are present in region 1. We have assumed that each ADD instruction of M has lj ≠ li and lk ≠ li, hence we do not need two copies of any l ∈ B in Π. If we do not have enough copies of ar because the initial phase was concluded prematurely (we have brought some copies of ar into the system, but all of them were already moved into membrane 2), then we cannot use a rule (li, out; lp ar, in), p = j, k. Because of the minimal parallelism, we must use the rule (li, out; t, in) (no other rule of R2 is applicable). Therefore, the computation will never end, because of the trap-rules (t, out), (t, in) ∈ R3. Note that no rule from R1 can be used, except (l0, out) whenever l0 is present in the skin region (which means that the initial instruction was already simulated).

(4) Simulating an instruction li : (SUB(r), lj, lk):

    Step 1:  R1: –;                             R2: (g(li)li, out; li′, in);                      R3: –
    Step 2:  R1: (g(li), out; g(li)li′′′, in);  R2: (li′ ar, out; li′′, in);                      R3: –
    Step 3:  R1: –;                             R2: (g(li)li′′′, in);                             R3: –
    Step 4:  R1: –;                             R2: (li′′′li′, out; lk, in), (li′′′li′′, out; lj, in);  R3: –
    Step 1 (of the next instruction):  R1: (li′′′, out);  R2: . . . . . .;                        R3: –
    At any step:  R1: –;                        R2: (d, out; li′′′t, in);                         R3: –
With li present in region 2, we use a rule (g(li )li , out; li′ , in) ∈ R2 by which the witness object g(li ) goes to the skin region, and the object li′ is brought into region 2. During the next step, this latter object checks whether there is a copy of ar present in region 2. In a positive case, rule (li′ ar , out; li′′ , in) ∈ R2 is used in parallel with (g(li ), out; g(li )li′′′ , in) ∈ R1 . This last rule brings into the system a “checking” symbol li′′′ .
In the next step we have two possibilities. If we use a rule (g(li)li′′′, in) ∈ R2, then both g(li) and li′′′ are introduced in region 2. Depending on what li′′′ finds here, one can use either (li′′′li′, out; lk, in) if the subtraction was not possible (because no ar was present), or (li′′′li′′, out; lj, in) if the subtraction was possible. Thus, the correct label selected between lj and lk is brought into region 2.

If in step 3, instead of a rule (g(li)li′′′, in) ∈ R2, we use again (g(li), out; g(li)li′′′, in) ∈ R1, then the existing object li′′′ either exits the system by means of the rule (li′′′, out) ∈ R1, and so we repeat the configuration, or we use the rule (d, out; li′′′t, in) ∈ R2 and the computation never stops (no other rule of R2 can be used). Similarly, if in step 3 we use the rule (li′′′, out) ∈ R1, then at the same time the object g(li) from region 1 either remains unchanged, or uses the rule (g(li), out; g(li)li′′′, in) ∈ R1. In the former case, in the next step we have to use the rule (g(li), out; g(li)li′′′, in) ∈ R1 (no other rule of R1 can be applied). In both cases, the configuration is repeated. Of course, if we directly use the rule (d, out; li′′′t, in) ∈ R2, then the computation never stops.

Thus, in all these cases, either the simulation of the SUB instruction is correct or the system never stops. In the next step, namely step 1 of the simulation of any type of instruction, the object li′′′ exits the system by means of the rule (li′′′, out) ∈ R1. Note that no other rule of R1 is used in this step, neither for ADD nor for SUB instructions. Consequently, both ADD and SUB instructions of M are correctly simulated. The configuration of the system is restored after each simulation, namely all the objects of g(B) are in region 1, and no object of B′′′ is present in the system. The simulation of the instructions of M can be iterated, and thus we can simulate in Π a computation of M.
If the computation of M does not stop, then the corresponding simulation in Π does not stop either. When the computation in M stops, the label lh is introduced in region 2, and registers 2 and 3 are empty. Then we use the following group of rules.

(5) Final rules:

    R1: –
    R2: –
    R3: (lh a1, in), (lh, out)
The halting label lh carries all the copies of a1 from region 2 to region 3, and only when this process is completed can the computation stop.
Consequently, the system Π correctly simulates (all and nothing else than) the computations of M. This means that Ngen(Π) = N(M).

Note that in this proof the output region is an elementary one, and that the minimal parallelism is essentially used in ensuring the correct simulation of the computations in the register machine M (for instance, the minimal parallelism prevents halting the computation in Π without completing the simulation of a halting computation in M).

The previous proof can be easily adapted to the accepting case: we consider l0 already present in region 1, and additionally introduce a1^m, where m is the number we want to accept. Omitting the technical details, we mention the following result:

Corollary 1.38. N^acc_min OPn(symp, antiq) = NRE for all n ≥ 3, p ≥ 2, q ≥ 2.
Moreover, this last result can be strengthened by considering deterministic P systems, and also by decreasing the number of membranes by one.

Theorem 1.39. DN^acc_min OPn(symp, antiq) = NRE for all n ≥ 2, p ≥ 2, q ≥ 2.
Proof. Let us consider a deterministic register machine M = (3, B, l0, lh, R) accepting an arbitrary set N(M) ∈ NRE. We again use the notation w(Q) as specified in the proof of Theorem 1.37, and Q^u for the sets {b^u | b ∈ Q}, where u denotes various markings (primings). We construct a P system (of degree 2)

    Π = (O, [1 [2 ]2 ]1, w1, w2, E, R1, R2, 1), with

    O = {l, l′, l′′, l′′′, l^iv, l^v | l ∈ B} ∪ {ar | 1 ≤ r ≤ 3} ∪ {c},
    w1 = l0,
    w2 = w(B^iv),
    E = O,

and with the following sets of rules:

(1) Simulating an instruction li : (ADD(r), lj, lj):

    Step 1:  R1: (li, out; lj ar, in);  R2: –

Since the environment contains the necessary objects, the ADD instruction is simulated whenever we have the object li in region 1.

(2) Simulating an instruction li : (SUB(r), lj, lk):
    Step 1:  R1: (li, out; li′li′′, in);                                           R2: –
    Step 2:  R1: (li′ ar, out; li′′′, in);                                         R2: (li^iv, out; li′′, in)
    Step 3:  R1: (li^iv li′, out; li^iv lk^v, in) or (li^iv li′′′, out; li^iv lj^v, in);  R2: –
    Step 4:  R1: (lk^v, out; clk, in), resp. (lj^v, out; clj, in);                 R2: –
    Step 1 (of the next instruction):  R1: . . . . . .;                            R2: (cli^iv, in)
With li present in region 1, we bring into the system the objects li′, li′′ from the inexhaustible environment. The object li′ is used to subtract 1 from register r, by leaving together with a copy of ar. This operation brings into the system an object li′′′; simultaneously, li′′ enters region 2, releasing the "checking" symbol li^iv. In the next step, depending on what li^iv finds in membrane 1, either (li^iv li′, out; li^iv lk^v, in) is used, whenever the subtraction was not possible, or (li^iv li′′′, out; li^iv lj^v, in) is used, if the subtraction was possible. The corresponding lj^v or lk^v is brought into the system. In the fourth step, these last objects exit the system and bring inside their corresponding non-marked objects lj or lk, together with an auxiliary object c. In the next step, object c helps object li^iv (which waits one step in the skin region) to return to region 2, thus making sure that the multiset of region 2 again contains all the objects of B^iv. This step is the first one in simulating another ADD or SUB instruction, and it is important to note that in this step no other rule from R2 can be used.

The computation starts by introducing in region 1 a multiset a1^m encoding the number m. Note that the initial label of M is already present here. By iterating the simulations of ADD and SUB instructions, we can simulate any computation of M. Obviously, since any computation of M is deterministic, the computation in Π is also deterministic (from each set Ri, in each step, at most one rule is applicable). When the halt instruction is reached, hence lh appears in the system, the computation stops (with a further last step, when the rule (cli^iv, in) ∈ R2 is used). We conclude that Nacc(Π) = N(M).
Note that the deterministic construction is simpler than the one in the proof of Theorem 1.37, because we do not have to bring "resources" into the system initially, and we have nothing to do in the final stage (in particular, we do not have to "clean" the output region of objects other than those we need to count).

5.2. Minimal Parallelism for P Systems with Active Membranes. A P system with active membranes of initial degree n ≥ 1 is a construct of the form Π = (O, H, µ, w1, . . . , wn, R, ho), where:
(1) O is the alphabet of objects;
(2) H is a finite set of labels for membranes;
(3) µ is a membrane structure, consisting of n membranes having initially neutral polarizations, labelled (not necessarily in a one-to-one manner) with elements of H;
(4) w1, . . . , wn are strings over O, describing the multisets of objects placed in the n initial regions of µ;
(5) R is a finite set of developmental rules, of the following forms:
  (a) [h a → v]^e_h, for h ∈ H, e ∈ {+, −, 0}, a ∈ O, v ∈ O* (object evolution rules, associated with membranes and depending on the label and the charge of the membranes, but not directly involving the membranes, in the sense that the membranes neither take part in the application of these rules nor are modified by them);
  (b) a [h ]^e1_h → [h b]^e2_h, for h ∈ H, e1, e2 ∈ {+, −, 0}, a, b ∈ O (in-communication rules; a possibly modified object is introduced in a membrane; the polarization of the membrane can also be modified, but not its label);
  (c) [h a]^e1_h → [h ]^e2_h b, for h ∈ H, e1, e2 ∈ {+, −, 0}, a, b ∈ O (out-communication rules; a possibly modified object is sent out of the membrane; the polarization of the membrane can also be modified, but not its label);
  (d) [h a]^e_h → b, for h ∈ H, e ∈ {+, −, 0}, a, b ∈ O (dissolving rules; in reaction with an object, a membrane can be dissolved, while the object specified in the rule can be modified);
  (e) [h a]^e1_h → [h b]^e2_h [h c]^e3_h, for h ∈ H, e1, e2, e3 ∈ {+, −, 0}, a, b, c ∈ O (division rules for elementary or non-elementary membranes; in reaction with an object, the membrane with label h is divided into two membranes with the same label, possibly of different polarizations; the object specified in the rule is replaced in the two new membranes by possibly new objects; the remaining objects may evolve in the same step by rules of type (a), and the result of this evolution is duplicated in the two new membranes; if membrane h contains other membranes, then they may evolve at the same time by rules of any type, and the result of their evolution is duplicated in the two new membranes);
(6) ho ∈ H is the label of the output membrane.
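For intuition, the five rule shapes can be written down as plain data. The sketch below is only an illustrative encoding (the tags, charges-as-strings, and the sample rules are our own, not from the text); it implements just types (a) and (c), applying an evolution rule and then a send-out rule to a single membrane:

```python
from collections import Counter

# Rule shapes as tagged tuples (illustrative):
#   ('evolve', h, e, a, v)          (a)  [h a -> v]^e_h
#   ('send_out', h, e1, e2, a, b)   (c)  [h a]^e1_h -> [h ]^e2_h b
# A membrane is (label, charge, Counter-of-objects).

def apply_evolve(membrane, rule):
    # Type (a): rewrite one occurrence of a into the multiset v.
    h, charge, content = membrane
    _, rh, re_, a, v = rule
    if rh == h and re_ == charge and content[a] > 0:
        content = content - Counter([a]) + Counter(v)
    return (h, charge, content)

def apply_send_out(membrane, rule):
    # Type (c): a leaves the membrane as b; the charge may change.
    h, charge, content = membrane
    _, rh, e1, e2, a, b = rule
    if rh == h and e1 == charge and content[a] > 0:
        return (h, e2, content - Counter([a])), b
    return membrane, None

m = (2, '0', Counter({'li': 1}))
m = apply_evolve(m, ('evolve', 2, '0', 'li', ['lp', 'a']))          # li -> lp a
m, sent = apply_send_out(m, ('send_out', 2, '0', '+', 'lp', 'lp'))  # lp exits
# membrane charge is now '+', only a remains inside, and lp was sent out
```

Types (b), (d), and (e) could be encoded the same way; type (e) would return two membranes sharing the label h.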
Note that, differently from [90], the rules of type (e) can also be applied to non-elementary membranes. The rules for dividing non-elementary membranes are of a different form in the literature (in several papers the division takes place only when internal membranes of opposite polarizations are present); here we consider the above uniform form of rules for dividing membranes, irrespective of whether they are elementary or not.

We use here the rules in a minimally parallel manner, in the following sense. All the rules of any type involving a membrane h form the set Rh (this means that all the rules of type (a) of the form [h a → v]^e_h, all the rules of type (b) of the form a [h ]^e1_h → [h b]^e2_h, and all the rules of types (c) – (e) of the form [h a]^e_h → z, with the same h, constitute the set Rh). Moreover, if a membrane h appears several times in a given configuration of the system, then for each occurrence of the membrane we consider a different set Rh; this means that we identify the i-th copy of membrane h with the pair (h, i), and we consider the set of rules Rh,i = Rh. Then, in each step, from each set Rh,i, h ∈ H (possibly without specifying i = 1 in the case where membrane h appears only once), from which at least one rule can be used, at least one rule must be used. In what follows, in order to make the sets of rules visible, we write explicitly the sets Rh, h ∈ H, instead of the global set R.

Of course, as usual for P systems with active membranes, each membrane and each object can be involved in only one rule, and the choice of rules to use and of objects and membranes to evolve is done in a non-deterministic way. We should note that for rules of type (a) the membrane is not considered to be involved: when applying [h a → v]^e_h, the object a cannot be used by other rules, but the membrane h can be used by any number of rules of type (a), as well as by one rule of types (b) – (e).
In each step, the rules are used in a bottom-up manner (first the inner objects and membranes evolve, and the result is duplicated if any surrounding membrane is divided). A halting computation provides a result given by the number of objects present in membrane ho at the end of the computation; for a computation to be successful, exactly one membrane with label ho should be present in the halting configuration. The set of numbers generated in this way by a system Π is denoted by N(Π), and the family of these sets, generated by systems having initially at most n1 membranes and using configurations with at most n2 membranes during the computation, is denoted by Nmin OP_{n1,n2}((a), (b), (c), (d), (e)); when a type of rules is not used, it is not mentioned in the notation.

We now prove the universality of P systems with active membranes working in the minimally parallel mode; as in the maximally parallel mode, only rules of the first three types are used; moreover, we use three polarizations. Actually, in the maximally parallel case the universality of P systems with active membranes can be obtained even without using polarizations (see [3]), at the price of using rules of all five types.
Theorem 1.40. Nmin OP_{n1,n2}((a), (b), (c)) = NRE for all n1 ≥ 4, n2 ≥ 4.
Proof. Let us consider a register machine M = (3, B, l0, lh, R) generating an arbitrary set N(M) ∈ NRE. We construct a P system (of initial degree 4)

    Π = (O, H, µ, l0, λ, λ, λ, R0, R1, R2, R3, 1), with

    O = {a} ∪ {l, l′, l′′, l′′′, l^iv, l^v, l^vi, l^vii, l^viii, l^ix | l ∈ B},
    H = {0, 1, 2, 3},
    µ = [0 [1 ]1 [2 ]2 [3 ]3 ]0,

and with the following rules (a membrane with label r is defined for each register r; the number stored in register r is represented by the number of objects a present in membrane r):

(1) Simulating an instruction li : (ADD(r), lj, lk), for r = 1, 2, or 3:

    Step 1:  R0: –;                             Rr: li [r ]^0_r → [r li]^0_r
    Step 2:  R0: –;                             Rr: [r li → lp′ a]^0_r, p = j, k
    Step 3:  R0: –;                             Rr: [r lp′]^0_r → [r ]^0_r lp′, p = j, k
    Step 4:  R0: [0 lp′ → lp]^0_0, p = j, k;    Rr: –
The label-object li enters the correct membrane r, produces one further copy of a and a primed label lj′ or lk′ inside membrane r; this label exits to the skin region and loses its prime, and the process can be iterated. Note that in each step, at most one rule from each set R0, R1, R2, R3 can be used.

(2) Simulating an instruction li : (SUB(r), lj, lk) (we mention the rules to be applied in each step, if their application is possible):

    Step 1:  R0: [0 li → li′li′′]^0_0;       Rr: –
    Step 2:  R0: [0 li′′ → li′′′]^0_0;       Rr: li′ [r ]^0_r → [r li′]^+_r
    Step 3:  R0: [0 li′′′ → li^iv]^0_0;      Rr: [r a]^+_r → [r ]^−_r a
    Step 4:  R0: [0 li^iv → li^v]^0_0;       Rr: [r li′ → lj^viii]^−_r
    Step 5:  R0: [0 li^v → li^vi]^0_0;       Rr: [r lj^viii]^−_r → [r ]^0_r lj^viii
    Step 6:  R0: [0 lj^viii → lj^ix]^0_0;    Rr: li^vi [r ]^0_r → [r li^vi]^0_r, li^vi [r ]^+_r → [r li^vi]^+_r
    Step 7:  R0: [0 lj^ix → lj]^0_0;         Rr: [r li^vi → λ]^0_r, [r li^vi → li^vii]^+_r
    Step 8:  R0: –;                          Rr: [r li^vii]^+_r → [r ]^0_r li^vii
    Step 9:  R0: [0 li^vii → lk]^0_0;        Rr: [r li′ → λ]^0_r
We start by introducing li′ , li′′ in the skin region; the former object enters membrane r associated with the rule to simulate, changing its polarization to +, the latter object evolves in the skin region to li′′′ . In the third step, while li′′′ changes to liiv in the skin region, a copy of object a can exit membrane r, providing that such an object exists; this changes the polarization of this membrane to negative. Let us consider separately the two possible cases. If we have inside membrane r the object li′ , and the membrane is positively charged (which means that the contents of register r was empty), then in steps 4 and 5 we can apply rules only in the skin region, adding primes to the primed li present here until reaching livi . In step 6, this object enters membrane r, evolves (by step 7) to ljvii which exits membrane 2 in step 8, returning its polarization to neutral. In these circumstances, in step 9 livii introduces lk in the skin region (this is the correct label-object for this case), and simultaneously li′ (waiting in membrane r from the beginning of the simulation) is removed by using the rule [ r li′ → λ] 0r from Rr (li′ cannot be removed earlier, because the polarization of membrane r was positive). If in step 3 the inner membrane gets a negative charge (which means that the subtraction from register r was possible), then we can continue with the rules mentioned in the table above for step 4: li′ introduces ljviii , which later will introduce lj . In parallel, in the skin region we use the rule [ 0 liiv → liv ] 00 . The next step is similar: we add one more prime to the object 0 viii of Rr is used, from the skin region, and the rule [ r ljviii ] − r → [ r ] r lj thus restoring the polarization of membrane r to neutral. In step 6, the object ljviii , just expelled from membrane r, introduces ljix in the skin region, while the “witness” livi enters into membrane r (now, neutral again). 
In step 7, l_j^ix introduces the correct label-object l_j in the skin region, and simultaneously l_i^vi is removed from membrane r. The simulation of the SUB instruction is thus completed. In both cases the simulation is correct, and the process can be iterated. Again, in each step, at most one rule from each set R_0, R_1, R_2, R_3 can be used (hence the minimal parallelism behaves, in this case, like the maximal parallelism).
We start with label l_0 in the skin region. If the computation in M halts, that is, label l_h is reached, then the object l_h is introduced in the skin region and the computation in Π halts as well. Consequently, N(M) = N(Π), and this concludes the proof.
1. MEMBRANE SYSTEMS AND THEIR SEMANTICS
It remains an open problem whether Theorem 1.40 can be improved by removing the polarizations of membranes and/or by decreasing the number of membranes. The previous system simulates the starting register machine in a deterministic manner, hence the result of the theorem remains true for deterministic accepting P systems. We leave the obvious details as a task for the reader, and pass to the important issue of using membrane division as a tool for generating an exponential workspace in linear time, and then using this space for addressing computationally hard problems. This possibility has been used successfully for many problems under maximal parallelism (we refer to [96] for details and references), and this remains pleasantly true for minimal parallelism, at least for the SAT problem. The framework in which we prove this assertion is the following: we start from a given instance of SAT (using n variables and having m clauses), and we construct, in polynomial time, a P system Π with active membranes of a size polynomially bounded with respect to n and m, all of whose computations halt in polynomial time with respect to n and m; moreover, all computations send to the environment either a special object yes or a special object no, and this happens in the last step. The propositional formula is satisfiable if and only if the object yes is released. Therefore, in the rigorous terminology of [96], we construct the system in a semi-uniform manner (starting from a given instance of the problem), and the system works in a weakly confluent way (we allow the system to evolve non-deterministically, but we impose that all computations halt and all of them provide the same answer). Improvements from these points of view (looking for uniform constructions, or for deterministic systems) are left as open problems.
In comparison with the existing similar solutions to NP-complete problems from the membrane computing literature based on maximal parallelism, we pay here a price for using minimal parallelism: namely, we use membrane division for non-elementary membranes. Whether membrane division for elementary membranes only suffices also remains an open problem.

Theorem 1.41. The satisfiability of any propositional formula in conjunctive normal form, using n variables and m clauses, can be decided in a time linear with respect to n by a P system with active membranes using rules of types (a), (b), (c), (e), and working in the minimally parallel mode. Moreover, the system can be constructed in a time linear with respect to n and m.

Proof. Let us consider a propositional formula σ = C1 ∧ · · · ∧ Cm, such that each clause Ci, 1 ≤ i ≤ m, is of the form Ci = y_{i,1} ∨ · · · ∨ y_{i,k_i}, k_i ≥ 1, for y_{i,j} ∈ {xk, ¬xk | 1 ≤ k ≤ n}. For each k = 1, 2, . . . , n, let us denote
t(xk) = {ci | there is 1 ≤ j ≤ k_i such that y_{i,j} = xk},
f(xk) = {ci | there is 1 ≤ j ≤ k_i such that y_{i,j} = ¬xk}.
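To make these sets concrete, here is a small illustrative sketch (not part of the proof): it computes the clause sets t(xk) and f(xk) for a given CNF instance. The encoding of clauses as lists of (variable index, polarity) pairs and the function name are our own assumptions.

```python
# Hypothetical helper: compute t(x_k) and f(x_k) for a CNF formula.
# A clause is a list of literals; a literal (k, positive) means x_k or ¬x_k.
# Clause C_i is named "c_i" (1-based), matching the objects c_i of the proof.

def clause_sets(clauses, n):
    """Return dicts t, f mapping each variable k to the set of clause names
    c_i that become true when x_k is set to true (t) or to false (f)."""
    t = {k: set() for k in range(1, n + 1)}
    f = {k: set() for k in range(1, n + 1)}
    for i, clause in enumerate(clauses, start=1):
        for (k, positive) in clause:
            (t if positive else f)[k].add(f"c{i}")
    return t, f

# sigma = (x1 ∨ ¬x2) ∧ (x2 ∨ x3) ∧ (¬x1 ∨ ¬x3)
clauses = [[(1, True), (2, False)],
           [(2, True), (3, True)],
           [(1, False), (3, False)]]
t, f = clause_sets(clauses, 3)
print(t[1], f[1])  # -> {'c1'} {'c3'}
```

These are exactly the multisets t(xi), f(xi) that the evolution rules of membrane 0 below inject when a variable is expanded.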
These are the sets of clauses which assume the value true when xk is true and when xk is false, respectively. We construct a P system Π with the following components (the output membrane is not necessary, because the result is obtained in the environment):
O = {ai, ti, fi | 1 ≤ i ≤ n} ∪ {ci | 1 ≤ i ≤ m} ∪ {gi | 1 ≤ i ≤ 2n + 6} ∪ {ei | 1 ≤ i ≤ 2n + 1} ∪ {b, c, yes, no},
H = {s, e, g, 0, 1, 2, . . . , m},
µ = [_s [_g ]_g [_0 [_e ]_e [_1 ]_1 [_2 ]_2 . . . [_m ]_m ]_0 ]_s,
wg = g1, we = e1, w0 = a1, ws = λ, wi = λ, for all i = 1, 2, . . . , m,
and with the following rules (we present them in groups with specific tasks, also commenting on their use). For the sake of readability, the initial configuration is given in Figure 1; initially, all the membranes have neutral polarization.

Figure 1. Initial Configuration of the System from Theorem 1.41 (the skin membrane s contains membrane g, holding g1, and membrane 0, which holds a1, membrane e with e1, and the empty membranes 1, 2, . . . , m)

The membranes with labels g and e, with the corresponding objects gi and ei, respectively, are used as counters which evolve simultaneously with the "main membrane" 0, where the truth assignments of the n variables x1, . . . , xn are generated; the use of separate membranes for the counters makes the correct synchronization possible even in the case of minimal parallelism. The evolution of the counters is done by the following rules:
[_g gi → g_{i+1} ]_g^0, for all 1 ≤ i ≤ 2n + 5,
[_e ei → e_{i+1} ]_e^0, for all 1 ≤ i ≤ 2n.
In parallel, membrane 0 evolves by means of the following rules:
[_0 ai ]_0^0 → [_0 ti ]_0^0 [_0 fi ]_0^0, for all 1 ≤ i ≤ n,
[_0 ti → t(xi) a_{i+1} ]_0^0 and [_0 fi → f(xi) a_{i+1} ]_0^0, for all 1 ≤ i ≤ n − 1,
[_0 tn → t(xn) ]_0^0,
[_0 fn → f(xn) ]_0^0.
In odd steps, we divide the (non-elementary) membrane 0 (with ti, fi corresponding to the truth values true and false for variable xi); in even steps we introduce the clauses satisfied by xi and ¬xi, respectively. When we divide membrane 0, all its inner objects and membranes are replicated; in particular, all the membranes with labels 1, 2, . . . , m, as well as membrane e, are replicated, hence they are present in all the membranes with label 0. This process lasts 2n steps. At the end of this phase, all 2^n truth assignments for the n variables have been generated. In parallel with the division steps, if a clause Cj is satisfied by the previously expanded variable, then the corresponding object cj enters membrane j by means of the rule cj [_j ]_j^0 → [_j cj ]_j^+, 1 ≤ j ≤ m, thus changing the polarization of this membrane to positive. This also happens in step 2n + 1, in parallel with the following rule for evolving membrane e: [_e e_{2n+1} ]_e^0 → [_e ]_e^0 e_{2n+1}.
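Abstracting away from membranes, the overall effect of this generate-and-check phase (produce all 2^n truth assignments, and for each of them mark the satisfied clauses) can be mirrored by a brute-force sketch; the encoding below is our own illustration, not the book's construction.

```python
from itertools import product

def satisfiable(clauses, n):
    """Mirror the P system's logic: enumerate every truth assignment
    (the 2^n copies of membrane 0) and check whether some assignment
    satisfies every clause (every membrane 1..m positively charged).
    A clause is a list of literals (k, positive) over variables 1..n."""
    for assignment in product([True, False], repeat=n):
        if all(any(assignment[k - 1] == positive for (k, positive) in clause)
               for clause in clauses):
            return True   # some copy of membrane 0 satisfies all clauses: yes
    return False          # object b gets trapped in every copy: no

# sigma = x1 ∧ ¬x1 is unsatisfiable
print(satisfiable([[(1, True)], [(1, False)]], 1))  # -> False
```

The membrane system performs this exponential enumeration in only 2n steps, because each division step doubles the number of copies of membrane 0 working in parallel.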
Thus, after 2n + 1 steps, the configuration of the system consists of 2^n copies of membrane 0, each of them containing the (now empty) membrane e, the object e_{2n+1}, possibly some objects cj, 1 ≤ j ≤ m, as well as copies of all the membranes with labels 1, 2, . . . , m; these membranes are either neutrally charged (if the corresponding clause was not satisfied by the truth assignment generated in that copy of membrane 0), or positively charged (in the case where the respective clause was satisfied – and hence one copy of cj is inside membrane j). Therefore, formula σ is satisfied if and only if there is a membrane 0 where all membranes 1, 2, . . . , m are positively charged. In order to check this last condition, we proceed as follows. In step 2n + 2 we use the rule [_0 e_{2n+1} → bc ]_0^0, which introduces the objects b and c in each membrane 0. The latter object exits the membrane in step 2n + 3 by using the rule [_0 c ]_0^0 → [_0 ]_0^+ c, thus changing the polarization of the membrane to positive. Simultaneously, b enters any of the membranes 1, 2, . . . , m with neutral polarization, if such
a membrane exists, by means of the rule b [_j ]_j^0 → [_j b ]_j^0, 1 ≤ j ≤ m. Therefore, if b can "get hidden" in a membrane j, this means that the truth assignment from the respective membrane 0 has not satisfied all clauses. In the opposite case, where all clauses were satisfied (hence all the membranes 1, 2, . . . , m have positive polarization), the object b waits one step in membrane 0. If object b is not blocked in an inner membrane, then in the next step (2n + 4) it exits membrane 0, by means of the rule [_0 b ]_0^+ → [_0 ]_0^+ b.
This was not possible in the previous step, because the membrane had neutral polarization. In the next two steps we use the rules
[_s b → yes ]_s^0,
[_s yes ]_s^0 → [_s ]_s^+ yes,
hence the object yes is sent to the environment in step 2n + 6, signalling the fact that the formula is satisfiable. If object b is blocked in an inner membrane in all the copies of membrane 0 (hence the propositional formula is not satisfiable), then no b evolves in steps 2n + 4 and 2n + 5; in these steps, the counter g increases its subscript, hence in step 2n + 6 we can use the rule [_g g_{2n+6} ]_g^0 → [_g ]_g^0 g_{2n+6}.
This rule is also used when the formula is satisfied; however, in that situation the skin membrane changes to positive polarization in step 2n + 6, and so the following rule cannot be used in step 2n + 7. If the formula is not satisfied, hence the skin membrane keeps its initial neutral polarization, then in step 2n + 7 we can use the rule [_s g_{2n+6} → no ]_s^0, and the object no is sent to the environment by means of the rule [_s no ]_s^0 → [_s ]_s^+ no.
Therefore, if the formula is satisfiable, then the object yes exits the system in step 2n + 6, and if the formula is not satisfiable, then the object no exits the system in step 2n + 8. In both cases, the computation halts. The system Π uses 7n + m + 11 objects, m + 4 initial membranes (containing 3 objects at the beginning of the computation), and 7n + 2m + 14 rules, all of these rules being of a length which is linearly bounded with respect to m (hence, the system can be constructed effectively in a time linearly bounded with respect to n and m). Clearly, all the computations stop after at most 2n + 8 steps, hence in a time linear with respect to n, and all give the same answer, yes or no, to the question whether formula
σ is satisfiable, hence the system is weakly confluent. These observations conclude the proof.

Note also that the previous construction does not use rules of type (d), for membrane dissolution, and that the time for deciding whether a formula is satisfiable does not depend on the number of clauses, but only on the number of variables.

The minimal parallelism remains to be considered for other classes of P systems, especially for systems with catalytic rules. From a mathematical point of view, it is also of interest to look for improvements of the results presented in Theorems 1.37, 1.39, and 1.40, concerning the number of membranes in the first and third theorems and the weights of symport and antiport rules in the first two theorems – as has been done in a series of papers for the case of maximal parallelism. It also remains to be determined which of the results known for maximal parallelism are also valid for minimal parallelism, and which results can be re-proven in the new framework. Returning to the initial motivation of this research, a very interesting problem is to find mechanisms for "keeping under control" the power of P systems in order to obtain non-universal classes of systems. Minimal parallelism fails in this respect, the sequential use of rules is similarly restrictive to the maximal parallelism, and non-synchronization is too loose and weak to be used in "programming" P systems. What else to consider is a topic worth investigating, and it might be useful to go back to biology and get inspiration from the living cell.

6. Membrane Transducers

This section is based on the paper [CP3], where four classes of membrane transducers are introduced, namely arbitrary, initial, isolated arbitrary, and isolated initial. The transducers are called P transducers. The first two classes are universal (they can compute the same word functions as Turing machines), while the latter two are incomparable with finite state sequential transducers, generalized or not.
We study the effect of composition, and show that iteration increases the power of these latter classes, also leading to a new characterization of the recursively enumerable languages. The "Sevilla carpet" of a computation is defined for P transducers, giving a representation of the control part of these P transducers. We consider an automata-like version of membrane systems with communication (e.g., symport/antiport systems). The sequence of symbols taken from the environment during a halting computation is the string recognized by the device. If several symbols are taken at the same time, then all their permutations are valid substrings of the recognized string. Such "P automata" were first introduced in [32] and then considered in a series of subsequent papers: [84, CG]. Universality results
are obtained in most cases: all recursively enumerable languages are recognized in this way. Here we go one step further and define a P transducer by considering an output in the form of a string. In order to distinguish which symbols exit and enter the system, we use special commands for introducing symbols into the system or sending them out: (a, read) is used only for reading a from the input and introducing it into the system, while (a, write) means that a is written to the output and cannot enter the system again. Moreover, (a, read) can appear only in multisets x having associated the command in, while (a, write) can appear only in multisets x having associated the command out. We also use promoters and inhibitors associated with the symport/antiport rules. Promoters are symbols which must be present in a membrane in order to allow the associated rule to be used; inhibitors are symbols which must be absent from a membrane in order to allow the associated rule to be used. In their general form, such devices can compute all Turing computable string functions. We look for restricted cases, and a natural possibility is to restrict the input of the system to the symbols of the input string and nothing more, i.e., all symbols entering the system come from pairs (a, read). The restricted transducers are not universal; actually, they cannot simulate even all finite state sequential transducers, although they can compute functions which are not computable by generalized sequential machines. This raises the interesting problem of composing such devices, or of iterating them indefinitely. It is an open problem whether or not composition increases the power, but this is true for iteration (without knowing whether we reach universality again).
We also briefly consider what we call the Sevilla carpet of a computation: the two-dimensional word obtained by recording in time the active membranes or the rules simultaneously used during a computation.
6.1. Transducers. The reader is assumed to have some familiarity with basic elements of automata and language theory, e.g., from [99]. The length of a string x ∈ V* is denoted by |x|, and the number of occurrences of a ∈ V in a string x ∈ V* is denoted by |x|a. The empty string is denoted by λ, and V+ = V* − {λ}. The prefix of length i of a string z ∈ V* with |z| ≥ i is denoted by pref_i(z). The families of regular, context-free, context-sensitive, and recursively enumerable languages are denoted by REG, CF, CS, and RE, respectively. For a family of languages FL, the family of length sets of languages in FL is denoted by NFL. Thus, NRE is the family of Turing computable sets of natural numbers. (Although REG ⊂ CF ⊂ CS ⊂ RE are all strict inclusions, for length sets we have NREG = NCF ⊂ NCS ⊂ NRE.) Several times we represent a string over V = {a1, . . . , ak} by its value in base k + 1. Specifically, if w = a_{i1} a_{i2} . . . a_{in}, 1 ≤ ij ≤ k for all 1 ≤ j ≤ n,
then val_{k+1}(w) = i1 · (k + 1)^{n−1} + · · · + i_{n−1} · (k + 1) + in. Note that val_{k+1}(w) = val_{k+1}(a_{i1} . . . a_{i_{n−1}}) · (k + 1) + in.

A generalized sequential machine (gsm) is a construct M = (K, I, O, s0, F, R), where K, I, O are alphabets (the set of states, the input and the output alphabets) such that K ∩ (I ∪ O) = ∅, s0 ∈ K (the initial state), F ⊆ K (the set of final states), and R is a finite set of rewriting rules of the form sa → xs′, where s, s′ ∈ K, a ∈ I, and x ∈ O* (reading the input symbol a in state s, the machine passes to the next symbol of the input, outputs the string x, and enters the state s′). A gsm defines a partial mapping M from I* to 2^{O*} as follows: for x ∈ I*, we have M(x) = {y ∈ O* | s0 x ⇒* ys, s ∈ F}. If M(x) = ∅, then M(x) is undefined. Clearly, dom(M) = {x ∈ I* | M(x) is defined} is a regular language. This condition imposes a strong restriction on the mappings which can be computed by a gsm, and we will use this condition later. It is known that the families REG, CF, RE are closed under arbitrary gsm mappings, while CS is closed only under λ-free gsm mappings, where a gsm is λ-free if all rules sa → xs′ have x ≠ λ. If we have |x| = 1 in all the rules sa → xs′ of a gsm M, then M is called a finite state sequential transducer (fsst, for short).

Usually, Turing machines are considered as string recognizers: we start with a string on the tape, in the initial state of the machine, and if we halt in a terminal state, then the string is accepted. The family of languages accepted in this way is RE. If the machine only uses a working space that is linearly bounded with respect to the length of the input string, then the recognized family is CS. We can also use a Turing machine to compute a string function; a variant is to say that a function is computed when its graph is recognized by a Turing machine.
The domain of a Turing computable string function is a language from RE in the general case, and a language from CS in the linearly bounded case. In the universality proofs from the subsequent sections we will use register machines; we refer to [77] for basic results in this area. An n-register machine is a construct M = (n, R, qs, qh), where:
• n is the number of registers, able to store natural numbers,
• R is a set of labelled instructions of the form q1 : (op(r), q2, q3), where op(r) is an operation on register r of M, and q1, q2, q3 are labels from a given set lab(M) (the labels are associated in a one-to-one manner to the instructions),
• qs is the initial/start label, and
• qh is the final label.
The machine is capable of the following instructions:
– (inc(r), q2, q3): Increment by one the contents of register r and non-deterministically proceed to any of the instructions (labelled with) q2 and q3.
– (dec(r), q2, q3): If register r is not empty, then decrement by one its contents and go to the instruction q2; otherwise go to instruction q3.
– halt: Stop the machine. This instruction can only be assigned to the final label qh.
A register machine M is said to compute a partial function f : N → N if, starting with any number m in register 1 and with instruction qs, M can stop at the final label qh with register 1 containing f(m); if the final label cannot be reached, then f(m) remains undefined. A register machine can also recognize an input number m ∈ N placed in register 1: m is accepted if and only if the machine can stop by the halt instruction. If we need such a property, we can also ensure that in the halting configuration all registers are empty. Register machines are known to be equal in power to Turing machines. For instance, the following useful result is known:

Lemma 1.42. For any language L ∈ RE over the alphabet T with card(T) = k there exists a register machine M such that for every w ∈ T*, w ∈ L if and only if M halts when started with val_{k+1}(w) in its first register, and in the halting step all registers of the machine are empty.

We recall the definition of a P system with symport/antiport rules.

Definition 1.43. A P system with symport/antiport rules is a construct Π = (V, T, µ, w1, . . . , wm, E, R1, . . . , Rm, io), where:
• V is an alphabet whose elements are called objects (in what follows, the terms symbol and object are used as synonyms);
• T ⊆ V is the terminal alphabet;
• µ is a membrane structure with the membranes labelled by natural numbers 1, . . . , m in a one-to-one manner (m ≥ 1 represents the degree of the system);
• w1, . . . , wm are multisets over V associated with the regions 1, . . . , m of µ, represented by strings from V*;
• E ⊆ V is the set of objects which are supposed to appear in an arbitrarily large number of copies in the environment;
• R1, . . .
, Rm are finite sets of symport and antiport rules associated with the membranes 1, . . . , m; a symport rule is of the form (x, in) or (x, out), where x ∈ V + , with the meaning that the objects specified by x enter and respectively exit the membrane, and an antiport rule is of the form (x, out; y, in), where x, y ∈ V + , which means that the multiset x is sent out of the membrane and y is taken into the membrane region from the surrounding region; • io is the label of the output membrane in µ (it is an elementary membrane, that is, it contains no other membrane inside).
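Before continuing, the base-(k+1) encoding val_{k+1} used in Lemma 1.42 can be computed directly from the recurrence noted earlier; the sketch below is our own helper, with a string given as its list of digit indices i_j ∈ {1, . . . , k}.

```python
def val(indices, k):
    """val_{k+1}: Horner-style evaluation of the recurrence
    val(w a_i) = val(w) * (k + 1) + i.
    Since no digit is 0, the encoding of nonempty strings is injective."""
    v = 0
    for i in indices:
        assert 1 <= i <= k, "digit indices must lie in 1..k"
        v = v * (k + 1) + i
    return v

# w = a1 a2 over T = {a1, a2} (so k = 2): val_3(a1 a2) = 1*3 + 2 = 5
print(val([1, 2], 2))  # -> 5
```

This is the number a register machine for L receives in its first register when recognizing the string w.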
Starting from the initial configuration, which consists of µ and w1 , . . . , wm , E, the system passes from one configuration to another by applying the rules from each set Ri in a non-deterministic and maximally parallel way. The environment is inexhaustible, at each moment all objects from E are available in any number of copies we need. A sequence of transitions is called a computation; a computation is successful if and only if it halts. With a successful computation we associate a result, in the form of the number of objects from T in membrane io at the halting configuration. The set of all such numbers computed by Π is denoted by N (Π), and the family of all sets N (Π) computed by P systems with at most m membranes is denoted by N OPm (symr , antit ); we refer here to P systems with symport rules (x, in), (x, out) with |x| ≤ r, and antiport rules (x, out; y, in) with |x|, |y| ≤ t. We consider m ≥ 1 and r, t ≥ 0; r = 0 means that no symport rule is used, and t = 0 means that no antiport rule is used. We say that r, t are the weights of the symport and antiport rules, respectively. Proofs of the following results can be found in [87]. Theorem 1.44. N RE = N OP1 (sym1 , anti2 ) = N OP3 (sym2 , anti0 ) = N OP2 (sym3 , anti0 ). With the symport/antiport rules we can also associate promoters, in the form (x, in)|z , (x, out)|z , (x, out; y, in)|z , or inhibitors, in the form (x, in)|¬z , (x, out)|¬z , (x, out; y, in)|¬z , where z is a multiset of objects. These rules associated with membrane i are applied only if z is present or absent in the region of membrane i. The use of promoters z of length at most u is indicated by adding pu in front of sym, anti. The use of inhibitors z of length at most v is indicated by adding fv in front of sym, anti, respectively, in the notation of families N OPm (symr , antit ). The uncontrolled case corresponds to having p0 , f0 . 
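To make "non-deterministic and maximally parallel" concrete, here is a greedy sketch of a single step for one membrane. It is restricted to rules with a nonempty outgoing part (a pure (x, in) rule with x taken from the inexhaustible set E would fire unboundedly), it assumes all incoming objects come from E, and greediness picks just one of the possible maximal choices; the multiset representation is our own.

```python
from collections import Counter

def maximal_step(region, rules):
    """One maximally parallel step for a single membrane. Each rule is a pair
    (out_ms, in_ms): the multiset out_ms leaves the region and in_ms enters
    from the environment E. All applications must be enabled on the
    start-of-step contents, so consumption is taken from `avail` while the
    products are collected separately and added only at the end."""
    avail = Counter(region)        # objects not yet assigned to any rule
    incoming = Counter()
    progress = True
    while progress:                # keep firing until no rule is applicable
        progress = False
        for out_ms, in_ms in rules:
            need, gain = Counter(out_ms), Counter(in_ms)
            while all(avail[o] >= c for o, c in need.items()):
                avail = avail - need
                incoming += gain
                progress = True
    return avail + incoming

# antiport (a, out; bb, in): each a in the region is traded for two b's
print(maximal_step("aaa", [("a", "bb")]))  # -> Counter({'b': 6})
```

Maximality means exactly this: the step ends only when no further rule instance is enabled on the remaining unassigned objects.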
A version of these systems accepting a language was considered in [84], following the model of [32]: a string w over the alphabet T is recognized by the analyzing P system Π if and only if there is a successful computation of Π such that the sequence of terminal symbols taken from the environment during the computation is exactly w. If more than one terminal symbol is taken from the environment in one step, then any permutation of these symbols constitutes a valid subword of the input string. The language of all strings w ∈ T ∗ recognized in this way by Π is denoted by A(Π). Note that in this case the output membrane io plays no role, and it can be omitted from the presentation of the system. The family of languages accepted by systems of degree at most m, using symport rules of weight at most r and antiport rules of weight at most t is denoted by ALPm (symr , antit ). When promoters or inhibitors are used, we add pu , fv in front of sym or anti, with the obvious meaning. In the version we present here, if a string w is recognized by Π, then its symbols should exist in E. Since each element of E appears in the environment in arbitrarily many copies, we cannot introduce a symbol of w into the system by using a symport rule, since an arbitrarily
large number of copies would be introduced, hence the string will not be finite. Antiport rules are thus compulsory. In order to cope with this difficulty, and in order to recognize strings in a way closer to automata style, we consider a restricted mode called initial: take a string x = x(1)x(2) . . . x(n), with x(i) ∈ T, 1 ≤ i ≤ n; in the steps 1, 2, . . . , n of a computation we place one copy of x(1), x(2), . . . , x(n), respectively, in the environment (together with the symbols of E); in each step 1, 2, . . . , n we request that the symbol x(1), x(2), . . . , x(n), respectively, is introduced into the system (alone or together with other symbols); after exhausting the string x, the computation may continue, maybe introducing further symbols into the system, but without allowing the symbols of x to leave the system; if the computation eventually halts, then the string x is recognized. If the system halts before ending the string, or at some step i the symbol x(i) is not taken from the environment, then the string is not accepted. The language of strings accepted by Π in the initial mode is denoted by AI (Π). Proofs of the following results can be found in [84]. Theorem 1.45. RE = ALP1 (sym0 , anti2 ) = AI LP1 (sym0 , p1 anti2 ) = AI LP1 (sym0 , f1 anti2 ). Note that in the initial case, the universality is obtained at the price of using promoters or inhibitors. 6.2. P Transducers: Definitions and Examples. We now introduce the computing device we are going to investigate. Definition 1.46. A symport/antiport P transducer of degree m ≥ 1 Π = (V, µ, w1 , . . . , wm , E, R1 , . . . , Rm ),
where:
• V is an alphabet of objects;
• µ is a membrane structure with the membranes labelled by 1, 2, . . . , m (in what follows, the skin membrane always has the label 1);
• w1, . . . , wm are multisets over V associated with the regions 1, . . . , m of µ, represented by strings from V*;
• E ⊆ V is the set of objects which are supposed to appear in an arbitrarily large number of copies in the environment;
• R1, . . . , Rm are finite sets of symport and antiport rules associated with the membranes 1, . . . , m of µ.
The rules from R1 contain both "unmarked" symbols a ∈ V and "marked" symbols of the form (a, read) or (a, write), where a ∈ V. A marked symbol of the form (a, read) can only appear in multisets x such that (x, in) ∈ R1 or (y, out; x, in) ∈ R1, while a marked symbol of the form (a, write) can only appear in multisets x such that (x, out) ∈ R1 or (x, out; y, in) ∈ R1. Once the marked symbols
enter the system they become unmarked, and they can be used without restrictions by the other rules. The idea is that the symbols a appearing in the form (a, read) are read from the "input tape" of our device, while the symbols a appearing in the form (a, write) are written on the "output tape" of the device. A symbol a which appears in the form (a, read) is not present in the environment, but it is provided to the system as an input, from the "input tape", at the request of a rule. Each rule of the form (x, in), (y, out; x, in) such that x contains marked symbols just "asks" the input tape to provide the necessary symbols; this means that the translated string becomes available from left to right, in portions determined by the rules containing marked symbols. These rules are used in a sequential manner (each rule is applied only once in a step), but all marked symbols from x are provided at the same time. If several different rules need marked symbols at the same time, then all these symbols are assumed to become available on the input tape. Similarly, after sending a symbol a outside in the form (a, write), this symbol is not available to any rule which brings a inside; the symbol is written on the "output tape", and it does not become available in the environment. The rules sending symbols to the output tape are used in the standard parallel manner. For instance, if we have several copies of a in the skin region and only the rule ((a, write), out) can use the object a, then all these copies of a are "written" on the output tape at the same time. Thus, the functioning of a P transducer as above is as usual in P automata, with the differences that we also have an output, the input and the output use marked symbols only, and the rules asking for marked symbols are used in a sequential manner. We start in the initial configuration (the one defined by µ, the multisets w1, . . .
, wm , and the set E), we use the rules in the nondeterministic maximally parallel manner, and in this way we get transitions between the system configurations; when we terminate we have an output. In this way we associate an output to an input. An input string is a sequence of symbols a ∈ V introduced into the system as marked symbols of the type (a, read), an output string is a sequence of symbols a ∈ V which are sent out of the system in the form of marked symbols (a, write). When several symbols are taken or expelled at the same time, any permutation of them is accepted as a subword of the input or output string, respectively. Because of this “local commutativity” and because of the nondeterminism, to the same input w ∈ V ∗ we associate a set of strings Π(w) consisting of all output strings which are obtained at the end of halting computations which have w as input. The symport/antiport rules of a transducer can also have permitting or forbidding conditions. In what follows, we always use promoters which consist of one copy of a single object. In a system with conditional rules we can also use rules without promoters or inhibitors, and they are applied in the usual way.
An initial version of such transducers can be considered in the same way as the initial automata restrict general P automata: we request that, in order to recognize a string w = w(1)w(2) . . . w(n), where w(i) ∈ V, 1 ≤ i ≤ n, in each of the first n steps of a computation we introduce into the system one symbol of w; namely, w(i) is introduced in step i by a rule which contains (w(i), read). At the same time or after introducing the symbols of w, further symbols may be taken from the environment, but not in a marked form, hence they do not count towards the input. No restriction is imposed on the way the output is produced. Both for the general and the initial P transducers one can consider a further restriction: a transducer is said to be isolated if it takes no other symbols from the environment, the only input consisting of the symbols of the string to be translated. For the sake of consistency with the terminology, in an isolated system we also forbid sending objects to the environment, except that marked objects are sent to the output tape. Before starting to investigate the power of these four types of transducers (arbitrary, initial, arbitrary isolated, initial isolated), we discuss some examples which both illustrate the definitions and give an indication of the power of these machines.

Example 1.47. The first transducer (we denote it by Π1) is very simple: V = {a}, one membrane, and two rules: (a(a, read), in), ((a, write)^2, out). Clearly, each string a^n, n ≥ 0, is translated into a^{2n}. In each step one a is taken from the tape and one from the environment, and in the next step both these symbols are sent to the output tape. This mapping can also be computed by a gsm, but not by an fsst.

Example 1.48. Consider now a more complex system:
Π2 = (V, [_1 [_2 ]_2 [_3 ]_3 ]_1, d, eg, cfZ, ∅, R1, R2, R3),
V = {a, b, c, d, e, f, g, Z},
R1 = {((a, read), in)|_d, ((b, read), in)|_d, ((a, write), out)|_g},
R2 = {(cd, in), (ab, in)|_d, (e, out)|_d, (Z, in), (Z, out), (f, in), (a, out)|_f, (g, out)|_f},
R3 = {(c, in), (c, out), (e, in), (a, in)|_e, (b, in)|_e, (aZ, out), (bZ, out), (f, out)|_e}.
For an easier understanding of the functioning of this system, we also give it in pictorial form, in Figure 2, with the rules placed near the membranes with which they are associated. As long as d is present in the skin membrane, symbols a and b are brought into the system; d, in its turn, stays in the skin region for any even number of steps, while the symbol c oscillates through membrane 3. After introducing c, d into membrane 2, by means of the rule (cd, in) ∈ R2, no further input is possible. However, when d is in membrane 2, the rule (ab, in)|d ∈ R2 can be used; it checks whether or not the number of copies of a is equal to the number
1. MEMBRANE SYSTEMS AND THEIR SEMANTICS
Figure 2. An Example of a P Transducer
of copies of b, in the following way. If d is present in membrane 2, then e can exit this membrane. From the skin region, e immediately enters membrane 3, and this makes possible the use of the rules (a, in)|e, (b, in)|e (because of the maximality of the parallelism of using the rules). We cannot have copies of both a and b remaining in the skin region, because all possible pairs ab were sent to membrane 2. Thus, if any of the rules (a, in)|e, (b, in)|e is used, then either a or b enters membrane 3 and releases from there the trap-object Z, which will pass forever through membrane 2, hence the computation will never halt. If no copy of a or b is present, then Z remains “locked” in membrane 3. In each of these cases, e enters membrane 3, and this releases from there the symbol f. This symbol goes immediately to membrane 2, and this makes possible the exit of the symbol g, as well as of all symbols a. With g in the skin region, all symbols a are written on the output tape. Consequently, the computation stops if and only if the input string was of the form w ∈ {a, b}∗ with |w|_a = |w|_b, and in such a case the output is a^n, for |w| = 2n. That is, for w ∈ {a, b}∗, Π2(w) = a^n if |w|_a = |w|_b = n, and it is undefined otherwise. Therefore, the domain of this mapping is not a regular language, which means that it cannot be computed by a gsm. It is easy to modify the previous transducer in order to map a string w ∈ {a1, a2, a3, a4}∗ to all x ∈ {a1, a3}∗ such that |x|_{a1} = |w|_{a1} and |x|_{a3} = |w|_{a3} if and only if |w|_{a1} = |w|_{a2} and |w|_{a3} = |w|_{a4} (the mapping is undefined otherwise). Such a transducer has a non-context-free domain. Note that the transducer Π2 is isolated and works in the initial mode.
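Setting aside the membrane dynamics, the partial mapping computed by Π2 can be rendered as a small Python sketch (the function name and the use of None for "undefined" are our own illustrative choices, not part of the formal model):

```python
def pi2(w):
    """Mapping computed by the transducer Pi_2: defined exactly on the
    balanced strings over {a, b}; on a string with n a's and n b's the
    output is a^n. A non-halting computation means "undefined"."""
    if set(w) - {"a", "b"}:
        return None                      # input alphabet is {a, b}
    if w.count("a") != w.count("b"):
        return None                      # trap-object Z released: undefined
    return "a" * w.count("a")
```

Note that the domain {w : |w|_a = |w|_b} is not regular, which is exactly why this mapping separates Π2 from gsm's.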
6. MEMBRANE TRANSDUCERS
Example 1.49. Consider also the transducer Π3 = ({a, b, d, f, g}, [ 1 [ 2 ] 2 ] 1 , df, g, {b}, R1 , R2 ), R1 = {(d, out), (d, in), ((a, read)b, in)|f , ((a, write), out)|f , ((b, write), out)|g }, R2 = {(d, in), (f, in)|d , (g, out)|f }.
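As detailed in the following paragraph, Π3 translates a^n into a^n b^n, and a small modification yields a^n b^n c^n. A plain behavioural sketch of these translations (hypothetical helper functions, ignoring the membrane mechanics):

```python
def pi3(w):
    """Translation computed by Pi_3: a^n -> a^n b^n, n >= 0;
    None stands for "undefined" on inputs not of the form a^n."""
    if w != "a" * len(w):
        return None
    return w + "b" * len(w)

def pi3_modified(w):
    """The suggested modification (hypothetical name): bring one b and
    one c together with each a, giving a^n -> a^n b^n c^n."""
    if w != "a" * len(w):
        return None
    n = len(w)
    return "a" * n + "b" * n + "c" * n
```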
As long as d oscillates through membrane 1, f is present in the skin region, hence we bring inside copies of a from the tape, together with paired b's from the environment. Then each symbol a exits, while b remains inside. After introducing d into membrane 2, f must also be introduced there; this stops the reading of the tape and makes possible the exit of g from membrane 2. In the presence of g, all copies of b accumulated in the system are written to the output. Therefore, we translate any string a^n, n ≥ 0, into a^n b^n. This system is no longer isolated, but it works in the initial mode. The previous system can be changed in order to translate a^n into a^n b^n c^n: together with each a we bring in both one b and one c, send out b in the presence of g and c in the presence of one further promoter, released from membrane 2 one step after the release of the object g; the details are left to the reader.

Example 1.50. The next transducer (working in the initial mode) is again rather simple:

Π4 = ({a, b, c, d, e}, [ 1 [ 2 ] 2 ] 1, d, de, ∅, R1, R2),
R1 = {((a, read), in)|d, ((b, read), in)|d, ((c, read), in)|d, ((a, write), out)|e, ((b, write), out)|e, ((c, write), out)|e},
R2 = {(d, out; d, in), (e, out; d, in)}.
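Π4, whose functioning is explained next, maps each input to any permutation of itself. A set-valued Python sketch of this behaviour (the function name is ours):

```python
from itertools import permutations

def pi4(w):
    """Set of outputs of Pi_4: all permutations of the input string,
    produced because all collected symbols exit in the same step."""
    return {"".join(p) for p in permutations(w)}
```

Applied to a regular language over {a, b, c}, this permutation closure need not be context-free, which is the point of the example.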
In the presence of d we introduce any string over {a, b, c} into the system; then, when e is brought out of membrane 2, all symbols are sent out at the same time, which means that any permutation of the input string is produced. Note that the system is isolated and that the permutation of a regular language over {a, b, c} is not necessarily a context-free language.

Example 1.51. We conclude with a transducer which computes the mapping h : V∗ → V∗ defined by

h(w) = a2 a1 . . . a2i a2i−1 . . . a2n a2n−1, if w = a1 a2 . . . a2n, n ≥ 1,
h(w) = undefined, if |w| is an odd number.

Such a P transducer is
Π5 = (V ∪ {b, c, d}, [ 1 [ 2 ] 2 ] 1 , b, cd, ∅, R1 , R2 ), R1 = {((a, read), in)|b , ((a, read), in)|c , ((a, write), out)|d , ((a, write), out)|b | a ∈ V }, R2 = {(a, in)|d , (a, out) | a ∈ V } ∪ {(c, out; b, in), (d, out; c, in), (b, out; d, in), (b, in)}.
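The mapping h of Example 1.51 (interchange each pair of neighbouring symbols; undefined on odd-length inputs) can be sketched directly, independently of the membrane construction:

```python
def h(w):
    """h(a1 a2 ... a_{2n}) = a2 a1 a4 a3 ... a_{2n} a_{2n-1};
    None stands for "undefined" when |w| is odd."""
    if len(w) % 2 == 1:
        return None
    out = []
    for i in range(0, len(w), 2):
        out += [w[i + 1], w[i]]          # interchange the neighbouring pair
    return "".join(out)
```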
One can see how the cyclic presence of the auxiliary symbols b, c, d in the skin membrane ensures that the neighbouring symbols of the input string are interchanged; this takes three steps, with no input taken in the presence of d. The computation stops only if b enters the innermost membrane by the rule (b, in) ∈ R2 at a moment when no symbol from V is present in the system; otherwise, such a symbol a will pass forever through membrane 2 by means of the rules (a, out), (a, in)|d from R2. Consequently, h(w) = Π5(w) for all w ∈ V∗.

Let us fix some notation. The families of (partial) string functions defined by fsst's, λ-free gsm's, arbitrary gsm's, Turing machines, and linearly bounded Turing machines are denoted by F SST, GSM, GSM λ, T T M, TLB T M, respectively. The family of (partial) string functions defined by P transducers of the general form, with at most m ≥ 1 membranes, using symport rules of weight at most r ≥ 0 and antiport rules of weight at most t ≥ 0, is denoted by T Pm(symr, antit). When we use promoters (resp. inhibitors) for symport or antiport rules, we add p (resp. f) in front of sym or anti; we always use only one symbol as promoter or inhibitor, hence we add no subscript to p or to f. When one of the parameters m, r, t is not bounded, we replace it with ∗; the case r = 0 (resp. t = 0) corresponds to not using symport (resp. antiport) rules. If we work in the initial mode, then we write TI P instead of T P, and when we use only isolated systems, we add I in front of T P. The union of all families XPm(psymr, pantit), for all m, r, t and X ∈ {T, TI, IT, ITI}, is denoted by XP. Note that these families are considered here only for the case of using promoters, but similar families can be considered for the case of inhibitors. With these notations, the examples lead to the following results:

Remark 1.52. ITI P3(psym2, anti0) − GSM λ ≠ ∅ (from Example 1.48).

Remark 1.53.
The families REG and CF are not closed under P transducers of the type ITI P2(sym1, anti1) or more complex (from Example 1.50). It is easy to see that any projection is in the family ITI P1(sym1, anti0): just output some symbols immediately and keep inside those which we want to erase. Since each RE language is the projection of a language from CS, we get:

Remark 1.54. The family CS is not closed under mappings from ITI P1(sym1, anti0).

In the case of isolated systems, it is clear that the output string can be longer than the input string by at most a constant number of symbols. This constant is bounded by the number of symbols present in the system at the beginning of the computation, that is, in the initial multisets present in the membrane structure. Such a property is not true for mappings from TI P, or for mappings from GSM. Moreover, the mapping h(a^n) = b^n, n ≥ 1 (undefined otherwise), cannot be computed by an isolated transducer: we
do not have inside the system enough copies of b. On the other hand, such a mapping is in F SST. Consequently, each of the families F SST, GSM, and GSM λ is incomparable with each of the families IT P, ITI P. Directly from the definitions we also have the relations in the diagram from Figure 3, in which the arrows indicate inclusions, while the dotted arrows indicate strict inclusions.

Figure 3. Relationships Among Families of Transduction Mappings

Some of these relations will be improved in the subsequent section. Although not all fsst's can be simulated by an isolated P transducer, those which do not require "too many" new symbols can. We say that an fsst M = (K, I, O, s0, F, R) is with bounded change if there is a constant rM such that whenever w ∈ M(z), for all 1 ≤ i ≤ |w| we have |pref_i(w)|_a ≤ |pref_i(z)|_a + rM for all a ∈ O. We denote by F SSTbc the family of mappings computed by fsst's with bounded change.
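The bounded-change condition can be checked mechanically; here is a hypothetical Python sketch of the prefix-counting test (pref_i(z) is taken to be all of z when i > |z|; symbols absent from w trivially satisfy the bound):

```python
def bounded_change_pair(z, w, r):
    """Check |pref_i(w)|_a <= |pref_i(z)|_a + r for every 1 <= i <= |w|
    and every symbol a occurring in the output w."""
    for i in range(1, len(w) + 1):
        pw, pz = w[:i], z[:i]            # Python slicing caps at |z|
        for a in set(w):
            if pw.count(a) > pz.count(a) + r:
                return False
    return True
```

The identity mapping has bounded change with r = 0, while the mapping a^n -> b^n fails for every fixed r once n > r, matching the earlier observation that it cannot be computed by an isolated transducer.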
Theorem 1.55. F SSTbc ⊂ ITI P2 (psym1 , anti3 ). Proof. For M = (K, I, O, s0 , F, R) let rM be the above mentioned “bounded change constant”. Denote I ′ = {a′ | a ∈ I}. Denote also by str(K) the string obtained by concatenating all states from K, and by str(rM ) a string such that |str(rM )|a′ = 1 for each a ∈ I, and
|str(rM )|a = rM for each a ∈ I. We construct the P transducer Π = (V, [ 1 [ 2 ] 2 ] 1 , s0 , str(K)str(rM )f ZZ, ∅, R1 , R2 ),
V = K ∪ I ∪ I′ ∪ O ∪ {f, Z},
R1 = {((a, read), in)|s0 | s0 a → bs ∈ R}
   ∪ {((a, read), in)|s, ((b, write), out)|s′ | sa → bs′ ∈ R}
   ∪ {((b, write), out)|f | b ∈ O},
R2 = {(s′ba′, out; s, in) | sa → bs′ ∈ R}
   ∪ {(a′a, in) | a ∈ I}
   ∪ {(f ba′, out; s, in) | sa → bs′ ∈ R, s′ ∈ F}
   ∪ {(Z, out; a′, in) | a ∈ I}
   ∪ {(Z, out; Z, in)}.
The rules (s′ba′, out; s, in) from R2 release into the skin region the state s′ and the output object b, in exchange for s, corresponding to a rule sa → bs′ ∈ R; at the same time, the same rule releases the object a′, which brings into membrane 2 the input symbol a. The symbol b will immediately exit the system (in the presence of state s′). In this way, one correctly simulates the transitions in M. When a terminal state is reached, we bring from membrane 2 the symbol f, which allows the corresponding symbol b to exit the system, and the computation halts (in the presence of f no further symbol is brought into the system; simultaneously, the symbols a, a′ from the skin region enter membrane 2). If at some moment we cannot bring from the environment the right symbol a, and hence cannot use a rule (a′a, in) ∈ R2, then the symbol a′ remains available in the skin region and we have to use the rule (Z, out; a′, in) ∈ R2; the computation will then never stop. Consequently, for any w ∈ I∗, we have M(w) = Π(w), and this completes the proof.

6.3. Universality of the General Case. As expected, P transducers with symport/antiport rules and promoters are very powerful, so it is no surprise to have the equality T T M = T P. Less expected is the fact that we have universality even in the initial mode, and even when the membrane structure of the used systems is minimal.

Theorem 1.56. T T M = TI P1(sym0, panti3).

Proof. We only have to prove the inclusion ⊆. Consider a Turing machine T M which defines a partial mapping from some alphabet V to 2^{V∗}. We construct a P transducer Π which works as follows. Assume that V = {b1, . . . , bk}. We introduce each string w ∈ V∗ into the system, codified by k numbers, val2(hi(w)), 1 ≤ i ≤ k, where hi : V∗ −→ {bi, b0}∗ is defined by hi(bi) = bi and hi(bj) = b0 for all j ≠ i (if each bi is identified with 1 and b0 with 0, then val2(hi(w)) is the binary value of hi(w)).
This operation can be done by starting with the objects cd in the
system, and with c also outside the system, in the environment (c, d ∉ V), by using the following rules:

(d, out; d(bj, read)aj, in)|c, for all 1 ≤ j ≤ k,
(c, out; c, in),
(aj, out; a2j, in)|c, for all 1 ≤ j ≤ k,
(c, out; qs(bj, read)aj, in), for all 1 ≤ j ≤ k.

This phase proceeds as follows. At each step, c just passes back and forth through the skin membrane. In parallel, d brings the symbols bj of our string inside the system, together with a copy of the corresponding aj. The rule used, (d, out; d(bj, read)aj, in)|c, is promoted by the object c, always present in the system. In each step, each copy of each object ar present in the system is multiplied by 2 by using the rule (ar, out; a2r, in)|c, 1 ≤ r ≤ k, in parallel for all values of r; these rules are also applicable only in the presence of c. If the string we want to recognize is exhausted while still using these rules, then the computation will last forever, for instance by using the rule (c, out; c, in). Therefore, at some moment we have to use the rule (c, out; qs(bj, read)aj, in) in parallel with the rules (ar, out; a2r, in)|c. Because the object c is sent into the environment, from now on the rules (c, out; c, in), (ar, out; a2r, in)|c, and (d, out; d(bj, read)aj, in)|c can no longer be used. This means that at the moment of using the rule (c, out; qs(bj, read)aj, in) we have to exhaust the string w (the symbol bj is the last one of the string). At this moment, we know the string, because we have in the system all numbers val2(hi(x)), 1 ≤ i ≤ k. The mapping v : N^k −→ N defined by v(val2(h1(x)), . . . , val2(hk(x))) = valk+1(x), x ∈ V∗, is clearly recursive, hence it can be computed by a register machine M1. Assume that the initial instruction of this register machine has the label qs, the one introduced by the rule (c, out; qs(bj, read)aj, in). The machine M1 can be simulated in our system by antiport rules of weight 2 (without promoters).
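The encoding used in this phase can be sketched in Python: strings over V = {b1, . . . , bk} are represented as lists of indices 1..k, val2(hi(x)) is the binary code marking the positions of bi, and the recursive mapping v recovers valk+1(x) from the k codes. Function names are ours; |x| is recovered as the maximal bit length of the codes, since every position carries a 1 in exactly one of them:

```python
def val2_hi(x, i):
    """Binary value of h_i(x): digit 1 where x has b_i, 0 elsewhere."""
    v = 0
    for b in x:
        v = 2 * v + (1 if b == i else 0)
    return v

def val_base(x, k):
    """val_{k+1}(x): the value of x read as a base-(k+1) numeral."""
    v = 0
    for b in x:
        v = (k + 1) * v + b
    return v

def v_map(codes, k):
    """The mapping v: from (val2(h_1(x)), ..., val2(h_k(x))) to val_{k+1}(x)."""
    length = max(c.bit_length() for c in codes)   # equals |x| for nonempty x
    v = 0
    for pos in range(length):
        shift = length - 1 - pos
        digit = next(i for i in range(1, k + 1)
                     if (codes[i - 1] >> shift) & 1)
        v = (k + 1) * v + digit
    return v
```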
A general way to pass from a register machine M to a set of rules in a P system which simulates M can be used, such as in the proofs of Theorems 1.44 and 1.45. We recall here the basic construction for the case of using antiport rules only (of weight 2); this R2P construction will be used several times below. Consider a register machine M = (n, R, qs, qh). With a register r we associate the object ar, such that the natural number present in this register at a given time is represented by the number of copies of ar present in the skin membrane of the P system which simulates M. Moreover, the labels of instructions are considered objects of our P system. Then, the instructions from R can be simulated by antiport rules as follows:
• An increment-instruction q1 : (inc(r), q2, q3) is simulated by the rules (q1, out; q2 ar, in), (q1, out; q3 ar, in).
• A decrement-instruction q1 : (dec(r), q2, q3) is simulated by the following rules: (q1 ar, out; q2, in), (q1, out; q1′ q1′′, in), (q1′ ar, out; f, in), (f, out; f, in), (q1′′, out; q1′′′, in), (q1′ q1′′′, out; q3, in). The condition of maximal parallelism guarantees that the rule (q1′ ar, out; f, in) is applied in parallel with (q1′′, out; q1′′′, in), which leads to a non-halting computation through the introduction of the trap-symbol f. Only if no symbol ar is present in the skin membrane in the current configuration can the object q1′ wait one step to be used in the rule (q1′ q1′′′, out; q3, in), together with the symbol q1′′′ introduced by the rule (q1′′, out; q1′′′, in).
• The halting instruction qh : halt is simulated by simply having no rule which involves the symbol qh.
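For reference, the register machines simulated by the R2P construction can be interpreted directly; a minimal sketch (assuming the increment branch always jumps to q2, whereas the formal model allows a nondeterministic choice between q2 and q3):

```python
def run_register_machine(prog, q_start, q_halt, regs):
    """prog maps a label to ('inc', r, q2, q3) or ('dec', r, q2, q3):
    inc adds 1 to register r and jumps to q2 (or q3; here always q2);
    dec subtracts 1 and jumps to q2 if register r is positive,
    otherwise leaves it at 0 and jumps to q3."""
    q = q_start
    while q != q_halt:
        op, r, q2, q3 = prog[q]
        if op == "inc":
            regs[r] += 1
            q = q2
        elif regs[r] > 0:
            regs[r] -= 1
            q = q2
        else:
            q = q3
    return regs
```

For example, a three-instruction program that empties register 0 while adding 2 to register 1 per unit computes doubling.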
The overall simulation of M starts by making sure that the object qs (the initial label of M) is present in the system. Its appearance acts as a trigger of the computation in M. The object qs, together with a number of copies of the object a1 (the number which we want to recognize by the register machine), makes the machine work and produce the symbol qh. Thus, assume that we have simulated in Π the register machine M1, and we have reached the halt instruction of M1. Now, the mapping defined by the string processing Turing machine T M corresponds to a numerical mapping g : N −→ 2^N such that m ∈ g(n) if and only if z ∈ T M(w), for valk+1(w) = n and valk+1(z) = m. This mapping g can be computed by a Turing machine, hence also by a register machine M2. Making sure that from the halt instruction of the register machine M1 we pass to the initial instruction of M2, via the R2P construction we can also ensure that the work of M2 is simulated by our system Π. If T M(w) is defined, then M2 stops, hence Π introduces the label of its halt instruction. What remains to be done is to “decode” a string z from its numerical representation valk+1(z). If z = bi z′, for some bi ∈ V, z′ ∈ V∗, then there is a register machine M3 which starts from valk+1(z) and produces valk+1(bi) = i in the form of i copies of a certain symbol ar corresponding to a register r, and valk+1(z′) in the form of symbols aj with j ≠ r. From valk+1(z′) we then repeat the procedure. The machine M3 can be simulated in our system in the same way as M1 and M2. Thus, in order to conclude the construction, we have to show how the system will produce the symbol bi starting from i copies of some symbol ar. This can be done as follows. Assume that when M3 halts we have a symbol h in the system, together with the i copies of ar; assume also that we have a copy of a symbol d inside the system, as well as the symbols c1, c2, . . . , ck corresponding to the k symbols of V.
We also need in the environment the symbols d, g, b1 , . . . , bk , Z. Consider
the following rules:

(ar c1 d, out; d, in)|h,
(ar cj, out; cj−1, in)|h, 2 ≤ j ≤ k,
(ar cj h, out; cj−1 gbj, in), 2 ≤ j ≤ k,
(ar c1 h, out; gb1, in),
(ar, out; Z, in)|g,
(hcj, out; Z, in), 1 ≤ j ≤ k,
(Z, out; Z, in),
(g(bj, write), out; dcj qs, in), 1 ≤ j ≤ k.
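These rules extract one symbol bj by counting down the copies of ar with the counters c1, . . . , ck. The overall decoding of the output string from valk+1(z), which the repeated runs of M3 implement symbol by symbol, can be sketched as follows (a hypothetical helper; valid codes use only digits 1..k, never 0):

```python
def decode(n, k, alphabet):
    """Recover z from n = val_{k+1}(z); alphabet[i-1] is the symbol b_i."""
    digits = []
    while n > 0:
        digits.append(n % (k + 1))
        n //= k + 1
    return "".join(alphabet[d - 1] for d in reversed(digits))
```

The sketch peels digits from the least significant end and reverses, while the construction in the proof extracts the leading symbol first; both recover the same string.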
In the presence of h, we send out one copy of ar , together with c1 . We can continue (in the presence of h), sending c2 out together with ar , and bringing back c1 , and so on. At each step only one rule is used. If we exhaust the copies of ar , then we have to use the rule (hcj , out; Z, in), and the computation will never halt, because Z can pass forever through the unique membrane of the system. Thus, at some time we have to send h out by means of the rule (ar cj h, out; cj−1 gbj , in). If this copy of ar is not the last one in the system, then at the next step we have to use (ar , out; Z, in)|g , and the computation never stops. If this was the last copy of ar , then we continue by sending bj to the output tape, and restoring the symbols necessary inside: we bring back d and cj , as well as the label of the starting instruction of the register machine M3 which continues to deliver the symbols of the output string. The case k = 1 is handled by the rule (ar c1 h, out; gb1 , in). When the string is finished, we will remain with no objects ar in the system, and the computation will stop. Some missing technical details are left as an exercise to the reader. Finally the system Π computes exactly the mapping computed by T M , and this concludes the proof. Corollary 1.57. T T M = T P = TI P. Together with this result, the diagram from Figure 3 changes as in Figure 4, where all inclusions are proper, with the exception of ITI P ⊆ IT P , which is not known to be proper; the non-related families are incomparable. We conjecture that also the inclusion ITI P ⊆ IT P is proper. 6.4. The Isolated Case; Composition, Iteration. The isolated P transducers have a rather eccentric position with respect to usual transducers, being incomparable with finite state sequential transducers. The fact that isolated P transducers are not universal raises the question of whether we can get an increase of their power by composing them, or even iterating them arbitrarily many times. 
For fsst's and gsm's, it is known that composition does not increase the power (the families F SST, GSM, GSM λ are closed under composition), while by iterating an erasing gsm we get universality modulo an intersection with a regular language. Each language L ∈ RE can be obtained starting from a language consisting of a single
word of length one, iterating a gsm with a small number of states, and intersecting the result with a language of the form T∗, where T is an alphabet.
Figure 4. Relationships Among Families of Transduction Mappings

It is an open problem whether or not the families of isolated P transductions are closed under composition. Note that we have two cases, of general and of initial P transducers. We settle here the question for iteration: in both cases, iteration increases the power. Moreover, we again obtain a characterization of the recursively enumerable languages. For a mapping h : V∗ −→ 2^{V∗}, we denote by h∗ the mapping h∗(w) = ∪_{i≥1} h^i(w). Then, IT P∗ = {h∗ | h ∈ IT P}. It is assumed here that, when iterating a transducer Π, it is the same at each iteration and only the input is changed. That is, the transducer contains the same initial multisets of objects inside at each iteration; we say that the device is reset after each iteration. Initially we input a string x, then we input any string from Π(x), then any string from Π(Π(x)), and so on, collecting the results of any computation that terminates.

Theorem 1.58. ITI P∗ − IT P ≠ ∅.

Proof. Let us consider the following system:

Π = (V, [ 1 [ 2 ] 2 [ 3 ] 3 ] 1, c′, gg′Z, bccdefg, ∅, R1, R2, R3),
V = {a, b, c, c′ , d, e, f, g, g ′ , Z}, R1 = {((a, read), in)|c′ , ((a, read), in)|c , ((b, read), in)|c , ((a, write), out)|c , ((b, write), out)|c , ((b, write), out)|d , ((a, read), in)|g , ((b, read), in)|g , ((e, write), out), ((a, write)(b, write), out)|g′ },
R2 = {(g, out; g, in), (g′, out; g, in), (Z, out; g′a, in), (Z, out; g′b, in)},
R3 = {(fc, out; c′, in), (c, out; c, in), (bd, out; c, in), (fge, out; c′, in), (Z, in), (Z, out)}.

In order to see how this transducer works, we represent it graphically in Figure 5.
Figure 5. A P Transducer for which Iteration Helps
The system has two kinds of behaviours: • When c is present in the skin region. This means that in the first step we use the rule (f c, out; c′ , in) ∈ R3 , and then we read any string over {a, b} and append one more b to its right hand end. • When g is present in the skin region. This means that in the first step we use the rule (f ge, out; c′ , in) ∈ R3 , and then we check whether the input string has the same number of occurrences of a and b and we output it, marked to the left with e, only in the affirmative case. Such a string, starting with e, cannot be further processed, hence the iteration cannot be continued. In the first case, each a and each b of the input is immediately sent out. After processing correctly and finishing the input string, we send c to membrane 3 in exchange of bd; b exits immediately and the computation stops. In the second case, all copies of a and b are collected in the system in the presence of g. After finishing the input string, g enters membrane 2 by exchange with g ′ . At the same time, g ′ allows the use of the rule ((a, write)(b, write), out)|g′ , which sends out pairs of symbols a, b, and, if any unpaired a or b remains, makes necessary the use of one of the
rules (Z, out; g′a, in), (Z, out; g′b, in) ∈ R2. It is possible to use the rules (Z, out; g′a, in), (Z, out; g′b, in) instead of ((a, write)(b, write), out)|g′ for the same copies of a, b, but this would lead to an endless computation. If Z is released from membrane 2, then it will pass forever through membrane 3. Therefore, the computation stops if and only if we have the same number of copies of a and of b. Note that the transducer works in the initial mode. Consequently, for all n ≥ 1, we have

Π∗(a^n) = {a^n b^m | m ≥ 1} ∪ {ew | w ∈ {a, b}∗, |w|_a = |w|_b = n}.
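The two behaviours of Π and the iteration Π∗ can be sketched as a set-valued search in Python (the function names and the length bound, added only to keep the search finite, are ours):

```python
from itertools import permutations

def pi_step(x):
    """One application of the transducer Pi of Theorem 1.58."""
    if x.startswith("e"):
        return set()                     # e-marked strings cannot be re-input
    out = {x + "b"}                      # c-mode: copy the input, append one b
    if x.count("a") == x.count("b"):     # g-mode: emit collected symbols,
        out |= {"e" + "".join(p)         # in any order, marked by e
                for p in permutations(x)}
    return out

def pi_star(x, max_len):
    """Pi^*(x), restricted to outputs of length <= max_len."""
    seen, frontier = set(), {x}
    while frontier:
        new = {z for y in frontier for z in pi_step(y)
               if len(z) <= max_len} - seen
        seen |= new
        frontier = new
    return seen
```

For instance, pi_star("aa", 5) contains aab, aabb, aabbb and every e-marked string with two a's and two b's, matching the formula above.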
Such a mapping cannot be computed by an isolated P transducer, irrespective of the mode of working, initial or not, because such a device can only increase the length of the input by a constant.

Corollary 1.59. IT P∗ − IT P ≠ ∅, ITI P∗ − ITI P ≠ ∅.

The proof of Theorem 1.58 is slightly more complex than necessary, but the aim was to suggest a possible way to prove that, modulo an intersection with a regular language of the form {e}T∗, where T is an alphabet, and the erasing of a single symbol e, iteration leads to universality. By making use of “controllers” like c, g, we can switch between a phase when we accumulate enough auxiliary symbols in the iterated string, without losing the initial string, which always remains a prefix of the current string, and a phase when, in the presence of g, we can simulate a Turing machine working as a transducer. This may possibly be done as in the proof of Theorem 1.56, using register machines. Such a construction looks possible, but rather complex. Does it really work? Anyway, if we do not aim to get the same power as T T M modulo some further transformations, but to characterize RE, then iteration helps.

Theorem 1.60. Each language L ∈ RE, L ⊆ T∗, can be written in the form L = T∗ ∩ h∗(L0), where h ∈ ITI P2(psym2, anti3) and L0 is a singleton language.

Proof. Let us consider a type-0 Chomsky grammar G = (N, T, S, R) in the Kuroda normal form, that is, with the rules from R of one of the forms A → BC, A → a, A → λ, AB → CD, for A, B, C, D ∈ N, a ∈ T. Without loss of generality, we may assume that the four symbols A, B, C, D from each rule AB → CD in R are always distinct. We assume the rules from R labelled in a one-to-one manner, pi : u → v, for i = 1, 2, . . . , n. We construct the P transducer Π = (V, [ 1 [ 2 ] 2 ] 1, w1, w2, ∅, R1, R2), in the following way. The idea is to start from L0 = {##S##}, where S is the axiom of G and # is a new symbol, and at each iteration of Π to simulate one rule of R.
To this aim, we only need a limited supply of symbols, and such symbols can be provided in the initial multisets w1 , w2 of Π. In this way, we can simulate
any derivation in G. At the last iteration, we also erase the markers # (and we can also check whether any nonterminal is still present). The intersection with T∗ selects, from all strings obtained from ##S## at the various iterations, the terminal string obtained in the end. The components of Π are:

V = N ∪ T ∪ {c, f, t, Z, #}
  ∪ {pi, p′i, p′′i, p′′′i | pi : A → α ∈ R, α ∈ T ∪ {λ}}
  ∪ {pi, p′i, p′′i, p′′′i, p^iv_i | pi : AB → CD ∈ R}
  ∪ {pi, p′i, ⟨p′′i, α1α2, α3⟩, ⟨p′′′i, α1α2, α3⟩ | pi : A → BC ∈ R, where α1, α2, α3 ∈ N ∪ T}.
Then, w1 = c, and w2 contains all symbols from V with the exception of c, where each pi, 1 ≤ i ≤ n, as well as Z, appears twice. The symbols pi, with or without primes, are used for controlling the simulation of the associated rule from R. This simulation is more difficult in the case of rules which increase the length of the string, where the complex symbols ⟨p′′i, α1α2, α3⟩ are used; the “meaning” of ⟨p′′i, α1α2, α3⟩ is that we have to output α1 and α2 in the next two steps, in this order, while α3 is the symbol to be read. If we output two symbols at the same time, both their orderings are valid for a P transducer, but not for the starting grammar. We now present the rules from R1 and R2 for the four types of rules from R, also explaining how these rules are simulated in Π. Let us denote U = N ∪ T ∪ {#}. For each rule pi : A → a ∈ R, we have the following rules in R1 and R2:

R1:
((α, read), in)|pi, α ∈ U,
((α, write), out)|pi, α ∈ U,
((α, write), out)|p′i, α ∈ U,
((A, read), in)|p′i,
((α, read), in)|p′′i, α ∈ U,
((a, write), out)|p′′i,
((α, read), in)|p′′′i, α ∈ U,
((α, write), out)|p′′′i, α ∈ U,

R2:
(f pi, out; c, in),
(pi, out; pi, in),
(p′i, out; pi, in),
(p′′i a, out; p′i, in),
(p′′′i, out; p′′i A, in),
(Z, out; p′′i, in).
First, R1 contains the rules ((#, read), in)|c, ((#, write), out)|c, and R2 contains the rule (Z, out; Z, in). These rules from R1 are used only for ensuring that the work of Π is initial and that at each step, including the first ones, we read something from the input tape. The mentioned rule of R2 is a trap-rule, used only if the computation goes in a “wrong” way. The object pi is brought out of membrane 2 in a nondeterministic way, by means of the rule (f pi, out; c, in). As long as it is present in the skin region, any symbol from the input is read and immediately written to the output. During this time we use the rule (pi, out; pi, in) ∈ R2, which makes use of the two copies of pi. At some moment, we exchange pi for p′i. The
symbol read at the previous step (possibly #) is written out, and the next symbol to be read should be A, the left-hand member of the rule from R with label pi in G. At the same step when A is supposed to be read, the rule (p′′i a, out; p′i, in) ∈ R2 is used, which releases the symbol a from membrane 2. We have such symbols from the beginning. In the next step, a is sent to the output tape and the simulation of the rule is completed; one more symbol from the input tape is read, while from R2 we have to use either the rule (p′′′i, out; p′′i A, in), whenever A was present in the skin membrane, which ensures the correct simulation of the rule from R, or the rule (Z, out; p′′i, in), whenever A is not present. In the latter case, of a computation in Π which does not correspond to correctly simulating a rule from R, the computation will never end, because Z will oscillate through membrane 2. After simulating the rule, in the presence of p′′′i we again read each input symbol and write it out immediately, until exhausting the input. For each rule pi : A → λ ∈ R we introduce the same rules, with the following two changes: the rule ((a, write), out)|p′′i ∈ R1 is missing, and instead of (p′′i a, out; p′i, in) ∈ R2 we use the rule (p′′i, out; p′i, in). No symbol is produced, and in one step we output nothing. For pi : AB → CD ∈ R we introduce the following rules in R1 and R2:
R1:
((α, read), in)|pi, α ∈ U,
((α, write), out)|pi, α ∈ U,
((α, write), out)|p′i, α ∈ U,
((A, read), in)|p′i,
((B, read), in)|p′′i,
((C, write), out)|p′′i,
((D, write), out)|p′′′i,
((α, read), in)|p′′′i, α ∈ U,
((α, read), in)|p^iv_i, α ∈ U,
((α, write), out)|p^iv_i, α ∈ U,

R2:
(f pi, out; c, in),
(pi, out; pi, in),
(p′i, out; pi, in),
(p′′i C, out; p′i, in),
(p′′′i D, out; p′′i A, in),
(p^iv_i, out; p′′′i B, in),
(Z, out; p′′i, in),
(Z, out; p′′′i, in).
The simulation of the rule AB → CD proceeds in a way similar to the simulation of A → a, but this time we have to make sure that two symbols are read in the right places of the input string, otherwise the trap-symbol Z is released from membrane 2. Consider now the length increasing rules pi : A → BC ∈ R. They are simulated by the following rules:
R1:
((α, read), in)|pi, α ∈ U,
((α, write), out)|pi, α ∈ U,
((α, write), out)|p′i, α ∈ U,
((A, read), in)|p′i,
((α, read), in)|⟨p′′i, BC, α⟩, α ∈ U,
((B, write), out)|⟨p′′i, BC, α⟩, α ∈ U,
((α3, read), in)|⟨p′′′i, α1α2, α3⟩, α1, α2, α3 ∈ U,
((α1, write), out)|⟨p′′′i, α1α2, α3⟩, α1, α2, α3 ∈ U,

R2:
(f pi, out; c, in),
(pi, out; pi, in),
(p′i, out; pi, in),
(⟨p′′i, BC, α⟩BC, out; p′i, in), α ∈ U,
(⟨p′′′i, Cα1, α2⟩, out; ⟨p′′i, BC, α1⟩A, in), α1, α2 ∈ U,
(⟨p′′′i, α2α3, α4⟩, out; ⟨p′′′i, α1α2, α3⟩, in), α1, α2, α3, α4 ∈ U.
In the presence of p_i′ we output the previously read symbol, and we read A. Simultaneously, B and C are released from membrane 2, together with the symbol ⟨p_i′′, BC, α⟩, which reminds us of the fact that we have to output B and C in the next two steps, while reading any new symbol α from the tape. After one step, we have to remember that C and α have to be written, and a new symbol can be read, and so on. Indeed, in the presence of ⟨p_i′′′, α1α2, α3⟩, we can exit α1 and read α3, simultaneously with changing ⟨p_i′′′, α1α2, α3⟩ to ⟨p_i′′′, α2α3, α4⟩. The fact that B, C can be written at the right time is ensured by the fact that we have them in the skin membrane together with the suitable promoters. If any symbol ⟨p_i′′′, α1α2, α3⟩ is released from membrane 2 and the next symbol to be read from the tape is not α3, then no symbol can be read at the next step; this is not correct with respect to the definition of the initial way of working, hence the computation is "lost". Thus, the rules of the form A → BC are also correctly simulated. In this way, all derivations with respect to G can be simulated in Π, one rule in each iteration. In order to remove the auxiliary symbols #, we also consider the following rules:

R1: ((#, read), in)|t, ((a, read), in)|t, a ∈ T, ((a, write), out)|t, a ∈ T.
R2: (f t, out; c, in).

At some iteration, we bring out of membrane 2 the symbol t. In its presence, we can read and write only terminal symbols, hence the computation is correct, in the initial mode, only if the string we read is of the form ##w## with w ∈ T∗, and the output is w. This concludes the proof: the intersection with T∗ will remove all nonterminal strings (e.g., produced at the end of the previous iterations), hence L(G) = T∗ ∩ Π∗(L0).

6.5. Sevilla Carpets of the P Transducers. Let us start by pointing out the fact that with a P transducer we can associate the language of Sevilla carpets, as proposed in [CP2].
We remind the reader that the Sevilla carpet associated with a computation in a parallel computing device is a generalization of the control word of a derivation in a Chomsky grammar. In the case of a grammar, we associate with each terminal derivation the string of labels of the rules used in the derivation. In the case of a device where several agents work simultaneously, we have to specify all agents active in each time unit during the computation, hence we get a two-dimensional "string". The set of all these "carpets" is a two-dimensional language associated with the device.

Membrane  Rule  Iteration 1  Iteration 2  Iteration 3  Iteration 4
   3       6    00000        000000       0000000      0000000
           5    00000        000000       0000000      0000000
           4    00000        000000       0000000      1000000
           3    00010        000010       0000010      0000000
           2    01100        011100       0111100      0000000
           1    10000        100000       1000000      0000000
   2       4    00000        000000       0000000      0000000
           3    00000        000000       0000000      0000000
           2    00000        000000       0000000      0000010
           1    00000        000000       0000000      0111100
   1      10    00000        000000       0000000      0100000
           9    00000        000000       0000000      0000003
           8    00000        000000       0000000      0001110
           7    00000        000000       0000000      0110000
           6    00001        000001       0000001      0000000
           5    00000        000010       0000110      0000000
           4    01110        011100       0111000      0000000
           3    00000        000100       0001100      0000000
           2    01100        011000       0110000      0000000
           1    10000        100000       1000000      1000000
Input           aaa          aaab         aaabb        aaabbb
Output          aaab         aaabb        aaabbb       ew
Figure 6. Sevilla Carpet for the System from Theorem 1.58

In our case, we have at least three possibilities: to consider as elementary agents the membranes and to specify the membranes which are active in each time unit, or to consider the rules from each membrane, with two sub-cases – indicating only whether a rule is used in each time unit, or specifying the number of applications of each rule in each time unit. In the first two cases we get a rectangle with the pixels marked with 0 (non-active) and 1 (active); in the third case the pixels are marked with natural numbers which can be arbitrarily large. The last case, specifying the number of applications of each rule in each time unit, provides the highest quantity of information;
that is why we illustrate the notion of the Sevilla carpet of this type for the transducer used in the proof of Theorem 1.58. We have three membranes, containing 10, 4, 6 rules, respectively, hence our carpets will have 20 levels. We assume these rules labelled from top down in the arrangement of Figure 5. When starting from a^3 and performing 4 iterations, we get a carpet as in Figure 6. The first three iterations are for accumulating the necessary three copies of b, and the last one is for sending out a string ew with |w|_a = |w|_b = 3. As in the case of generating/recognizing P systems, the Sevilla carpet associated with a computation in a P transducer can give a series of indications about the time-space complexity, the nondeterminism, and the fairness of the computation. As this remains largely a topic for further investigation, we do not enter here into further details concerning the Sevilla carpets of P transducers.
6.6. A Wealth of Research Topics. These P transducers take a string at the input and produce another string at the output. They have the same computational power as Turing machines. Our study is rather preliminary; many natural problems and research topics remain to be considered. Several problems have already been mentioned: the effect of the composition, the properties of the Sevilla carpets, etc. We outline here some further questions, and the reader can formulate many more. First, the rules containing marked symbols (a, read) are used in a sequential manner, which makes the model somewhat hybrid. What about supposing that the string to be translated is available in the environment in the form of the multiset of its symbols, all marked, and the rules are used in the standard maximally parallel manner? In such a case, a string a^n b^m will be introduced in the system in only one step by the rules from Example 3.2. This variant looks rather natural from the point of view of membrane computing (but not so much from the point of view of string automata theory), hence it deserves to be carefully investigated. Then, we have considered systems with permitting conditions (promoters) associated with the rules. What about forbidding conditions? More interesting: is it possible to get rid of these conditions and use only rules which are applied in the free manner? Also, in the case of the iteration we have assumed that the transducers are reset after each iteration. This assumption is rather useful, because it provides further copies of symbols necessary for enlarging the string; for an illustration of this idea see the proof of Theorem 1.60. What about assuming that the transducer is not reset after each iteration? This imposes a drastic restriction on the power of our devices: starting from a singleton, we can produce at most a finite set of strings, and a result such as that from Theorem 1.60 will not be possible.
Then, what can we obtain starting from a regular language and iterating a non-reset isolated transducer?
We have not paid too much attention to the descriptional complexity issue, that is, to the size of our systems. The most immediate criteria are the number of membranes and the weight of the symport and antiport rules. Do these criteria induce hierarchies of the families of computed mappings? What is the power of weaker, small transducers, those which are expected not to be universal? A rather interesting case is that of deterministic transducers, where each string is translated, whenever it is "recognized" by the transducer, into one single string. We have followed here the style of P automata, with the translated string being introduced into the system symbol by symbol, maybe also allowing the input of several symbols at the same time, but not necessarily all symbols in one step. Another interesting case would be to start the computation with the whole string to be translated, placed into the system at the beginning in a well-defined manner. For instance, designate some membranes i1, …, ik as input membranes, and introduce the first k symbols into these membranes, in the ordering of the labels, then continue with the next k symbols, and so on until exhausting the string. Does such a parallel input make any difference in power and properties? Symmetrically, instead of sending the resulting string piece by piece into the environment, it might be of interest to also designate some membranes as output membranes, and read the result from these membranes, in the ordering of their labels.
Bibliography
[AC1] O. Agrigoroaiei, G. Ciobanu. Dual P Systems. Lecture Notes in Computer Science vol.5391, 95–107, 2009.
[AC2] O. Agrigoroaiei, G. Ciobanu. Rewriting Logic Specification of Membrane Systems with Promoters and Inhibitors. Electronic Notes in Theoretical Computer Science vol.238, 5–22, 2009.
[ACL1] O. Andrei, G. Ciobanu, D. Lucanu. Executable Specifications of the P Systems. Lecture Notes in Computer Science vol.3365, 127–146, 2005.
[ACL2] O. Andrei, G. Ciobanu, D. Lucanu. A Structural Operational Semantics of the P Systems. Lecture Notes in Computer Science vol.3850, 32–49, 2006.
[ACL3] O. Andrei, G. Ciobanu, D. Lucanu. Operational Semantics and Rewriting Logic in Membrane Computing. Electronic Notes in Theoretical Computer Science vol.156, 57–78, 2006.
[ACL4] O. Andrei, G. Ciobanu, D. Lucanu. A Rewriting Logic Framework for Operational Semantics of Membrane Systems. Theoretical Computer Science vol.373, 163–181, 2007.
[CG] G. Ciobanu, M. Gontineac. P Machines: An Automata Approach to Membrane Computing. Lecture Notes in Computer Science vol.4361, 314–329, 2006.
[CL] G. Ciobanu, D. Lucanu. Events, Causality, and Concurrency in Membrane Systems. Lecture Notes in Computer Science vol.4860, 209–227, 2007.
[CP1] G. Ciobanu, L. Pan, Gh. Păun, M.J. Pérez-Jiménez. P Systems with Minimal Parallelism. Theoretical Computer Science vol.378(1), 117–130, 2007.
[CP2] G. Ciobanu, Gh. Păun, Gh. Ştefănescu. Sevilla Carpets Associated with P Systems. Proceedings Brainstorming Week on Membrane Computing, Report 26/03, Rovira i Virgili University, 135–140, 2003.
[CP3] G. Ciobanu, Gh. Păun, Gh. Ştefănescu. P Transducers. New Generation Computing vol.24, 1–28, 2006.
[CPP] G. Ciobanu, Gh. Păun, M.J. Pérez-Jiménez. Applications of Membrane Computing, Springer, 2006.
CHAPTER 2
Complexity of Membrane Systems

This chapter studies the computational complexity of various classes of membrane systems. We start by studying the computational complexity of simple P systems by considering the allocation of resources enabling the parallel application of the rules. A resource allocator is introduced to study the problem of distributing objects to rules in order to achieve a maximum consuming and nondeterministic evolution of simple P systems. Using the well-known knapsack problem, we prove that the decision version of the resource mapping problem is NP-complete. The evolution of a simple P system is simulated stage by stage by using a pseudo-polynomial algorithm for the knapsack problem. The allocation of resources to rules is an important stage in the nondeterministic and maximum consuming evolution of a simple P system, followed by the parallel application of the selected rules.

1. Computational Complexity of Simple P Systems

We use a subclass of transition P systems called simple P systems, where the left side of the rules can contain only a single object (having various multiplicities), and the rules are applied such that the maximum number of objects is consumed. The computational complexity of the allocation of resources for simple P systems is presented, extending an idea presented initially in [CG]. We use a classical combinatorial NP-complete problem, namely the knapsack problem, to show that the allocation of resources for this class is NP-complete.

1.1. Knapsack Problem. The knapsack problem, a classical combinatorial optimization problem, refers to finding a maximum total profit for a given set of items, each with an associated profit and weight, under a global weight limit. The discrete version refers to the fact that we can only include an item as a whole, not just a part of it. Another variant of the knapsack problem is the unbounded version [57], where an item can be included any number of times.
Mathematically, the discrete knapsack problem can be formulated as follows: given a bag of capacity c and n items labelled from 1 to n, each item i having a profit p_i and a weight g_i, maximize Σ_{i=1}^n p_i x_i, with x_i ∈ {0, 1}, subject to Σ_{i=1}^n g_i x_i ≤ c, where x_i = 1 means that we put item i into the bag. The decision version of this problem is known to be NP-complete. However, there exists a pseudo-polynomial time algorithm using dynamic programming with a running time of O(n · c).
The knapsack problem can be expressed as an optimization problem as follows:
• Objective function: max Σ_{i=1}^n p_i x_i
• Restrictions:
  – x_i ∈ {0, 1};
  – Σ_{i=1}^n g_i x_i ≤ c.
To solve this problem using dynamic programming, it is necessary to define the notion of a state, and the transitions between states. For this problem, a state is defined by the items we take into consideration. Starting with an initial state where there are no items to choose from, we continue to make transitions to the next state until we reach a final state. A transition is represented by the choice between inserting an item in the bag or not. We denote by f_i(X) = max{Σ_{j=1,i} p_j x_j | Σ_{j=1,i} g_j x_j ≤ X} the function which answers the question "Having the allowed weight X, what is the optimal profit obtained by using only the first i items?". The index i denotes the number of items taken into consideration; thus a state i is defined by the function f_i. The function f_i can be computed as follows:

(1)  f_i(X) = −∞, if X < 0;
     f_i(X) = 0, if i = 0 ∧ X ≥ 0;
     f_i(X) = max{f_{i−1}(X), f_{i−1}(X − g_i) + p_i}, otherwise.

The answer to the knapsack problem is given by the value of f_n(c), because we consider all the items and the maximum allowed weight. The functions f_i can be stored as a table, and can be computed starting from the initial state and going to the last state. The current state depends only on the previous state, and so it suffices to store only the last two lines in the table. We use the example given in Table 1, considering c = 10.

i    1   2   3
g_i  3   5   6
p_i  10  30  20

Table 1. A Knapsack Instance

Table 2 illustrates the recurrence defined by f.

X     0  1  2  3   4   5   6   7   8   9   10
f_0   0  0  0  0   0   0   0   0   0   0   0
f_1   0  0  0  10  10  10  10  10  10  10  10
f_2   0  0  0  10  10  30  30  30  40  40  40
f_3   0  0  0  10  10  30  30  30  40  40  40

Table 2. Values for f corresponding to the instance defined in Table 1
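As a minimal sketch (the function name and code are ours, not from the text), the recurrence (1) can be filled in bottom-up, reproducing the values of Table 2:

```python
# Bottom-up computation of f_i(X) from recurrence (1) for the instance
# of Table 1 (weights g, profits p, capacity c = 10).
def knapsack_table(g, p, c):
    n = len(g)
    # f[i][X] = best profit using only the first i items within weight X
    f = [[0] * (c + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for X in range(c + 1):
            f[i][X] = f[i - 1][X]          # alternative: skip item i
            if X >= g[i - 1]:              # alternative: take item i, if it fits
                f[i][X] = max(f[i][X], f[i - 1][X - g[i - 1]] + p[i - 1])
    return f

f = knapsack_table([3, 5, 6], [10, 30, 20], 10)
print(f[3][10])  # f_n(c) = 40, the optimal profit
```

Only two rows of f are ever needed at once, as noted above; the full table is kept here only to mirror Table 2.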
More details on knapsack problems can be found in [57].
1.2. Simple P Systems. A transition P system is represented by some regions delimited by membranes which contain multisets of objects evolving according to some associated rules. A computation consists of a number of transitions between system configurations, and the result is represented either by the multiset of objects present in the final configuration in a specific membrane, or by the objects which leave the outermost membrane of the system (the skin membrane) during the computation. We represent a multiset as a string over its support alphabet. The union v + w of two multisets over O is given by the sum of multiplicities for each element of O.
Definition 2.1. A transition P system of degree n, n ≥ 1, is a construct Π = (O, T, µ, w1, …, wn, R1, …, Rn), where:
(1) O is an alphabet of objects;
(2) T ⊆ O is the output alphabet;
(3) µ is a membrane structure of degree n;
(4) w_i, with i = 1,n = {1, …, n}, are strings which represent multisets over O associated with the membrane i of µ;
(5) each R_i (1 ≤ i ≤ n) is a finite set of evolution rules over O associated with the membrane i of µ; an evolution rule is a pair (u, v) written as u → v, where u is a non-empty string over O and v = v′ or v = v′δ, where v′ is a string over {a_here, a_out, a_inj | a ∈ O, 1 ≤ j ≤ n}; here, out, in_j are tags, and δ is a special symbol, all of them not in O.
A membrane structure is a hierarchical arrangement of membranes, hence it can be represented by a tree. A configuration of the system is given by the membrane structure and the multisets contained in each membrane. The initial configuration is the (n + 1)-tuple (µ, w1, …, wn). Having the possibility to dissolve a membrane, we can obtain a configuration which has only some of the initial membranes. A configuration of Π is defined as a sequence (µ′, w′_{i1}, …, w′_{ik}), with µ′ a membrane structure obtained by dissolving from µ all the membranes different from i1, …, ik, with w′_{ij} multisets over O, 1 ≤ j ≤ k, and {i1, …, ik} ⊆ {1, …, n}.

Definition 2.2. A simple P system is a maximum consuming non-cooperative transition P system where the left side of a rule consists of a single object with a certain multiplicity: a rule u → v ∈ R_i has u = a^k, where a ∈ O and k ∈ N. In such a system the rules are applied in parallel such that they consume the maximum number of available objects: if we denote by M the multiset of rules chosen to be applied, meaning that

Σ_{(u_j → v_j)^{k_j} ∈ M} u_j^{k_j} ⊆ w_i,

then for any other multiset M′ of applicable rules,

M′ = {(u_l → v_l)^{k_l} | u_l → v_l ∈ R_i ∧ k_l ∈ N} such that Σ_{(u_l → v_l)^{k_l} ∈ M′} u_l^{k_l} ⊆ w_i,

we have that

Σ_{(u_l → v_l)^{k_l} ∈ M′} |u_l| k_l ≤ Σ_{(u_j → v_j)^{k_j} ∈ M} |u_j| k_j.
1.3. Parallel Application of Rules by Allocation of Resources. The problem of distributing objects to rules in order to achieve a maximum consuming and nondeterministic evolution is studied from a computational complexity point of view. We envision a device capable of distributing the objects to rules, and establish its complexity. This device is called resource allocator, and we abbreviate it by RA. Using Definition 2.2, a maximum consuming evolution step of a simple P system corresponds to a multiset of rules which rewrites the maximum number of objects. Therefore the purpose of the resource allocator is to nondeterministically distribute multisets of objects to the rules of a membrane such that the evolution is then done in a maximum consuming way. Given this setup, the parallel application of rules depends on a RA able to solve an instance of the discrete knapsack problem. The nondeterminism comes from the fact that we can choose different multisets of rules which correspond to the solution of the knapsack problem, and we nondeterministically choose one of them. A resource allocator can be associated to each membrane of a simple P system. Given the parallel evolution of membranes, each resource allocator resolves a particular knapsack instance associated with its membrane, and each one operates independently of the others. A resource allocator can be formally defined as a mapping RA : O∗ → (O∗)^{|R|+1}, where O is the alphabet of objects, and R is the set of rules associated with a membrane. Considering w ∈ O∗ as the content of a membrane, w1, w2, …, w_{|R|} as the multisets allocated to rules (determined by their left sides), and w′ as the multiset of objects that are not consumed, we have

(2)  RA(w) = {(w1, w2, …, w_{|R|}, w′) | u_i ⊈ w′, i = 1,|R|, and Σ_{i=1}^{|R|} w_i + w′ = w}
Example 2.3. To clarify the previous definition, we give an example. Suppose that the resource allocator RA has to distribute the multiset a10 to the rules a4 → b, a3 → c and a2 → d. According to our definition, the resources can be distributed in multiple ways. We show only a few: a10 ⇒ 2 · a4 + 0 · a3 + 1 · a2
a10 ⇒ 1 · a4 + 2 · a3 + 0 · a2
a10 ⇒ 0 · a4 + 2 · a3 + 2 · a2
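As a brute-force sketch of such a distribution (the helper below is ours, restricted to a single object type a, with rules given only by the sizes of their left sides), one can enumerate all multiplicities of rule applications and keep the maximum consuming ones:

```python
from itertools import product

# Enumerate all multiplicity vectors (k1, ..., k|R|) of rule applications
# over w = a^w_size, and keep those that consume the maximum number of
# objects (Definition 2.2), mimicking the resource allocator RA.
def max_consuming_allocations(w_size, lhs):   # lhs: left-side sizes, e.g. a^4 -> 4
    best, solutions = -1, []
    ranges = [range(w_size // u + 1) for u in lhs]
    for ks in product(*ranges):
        used = sum(k * u for k, u in zip(ks, lhs))
        if used <= w_size:                    # the allocation must fit in w
            if used > best:
                best, solutions = used, [ks]
            elif used == best:
                solutions.append(ks)
    return best, solutions

# Rules a^4 -> b, a^3 -> c, a^2 -> d applied to a^10, as in Example 2.3.
best, sols = max_consuming_allocations(10, [4, 3, 2])
print(best)  # 10 objects consumed
```

The three distributions shown above appear among the solutions as the vectors (2, 0, 1), (1, 2, 0) and (0, 2, 2), while (0, 3, 0), i.e. 3 · a^3, does not, since it consumes only 9 objects.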
This example illustrates the fact that maximizing the number of consumed objects is different from applying the rules in a maximally parallel way; for instance, we cannot distribute a^10 as 3 · a^3, which is maximally parallel but not maximum consuming. The resource mapping problem (shortly RMP) can be formulated as follows: given a resource allocator RA for a membrane of a simple P system, construct a multiset which is maximum consuming according to Definition 2.2. Thus RMP can be viewed as an implementation of a resource allocator RA.

Definition 2.4. Formally, an instance of RMP is given by (i, w_i, R_i) where:
(1) i is a membrane of µ, where µ is a membrane structure of a simple P system,
(2) w_i is the multiset of objects associated with membrane i,
(3) R_i is the set of rules associated with membrane i,
such that for

M = {(u_l → v_l)^{k_l} | u_l → v_l ∈ R_i} with Σ_{(u_l → v_l)^{k_l} ∈ M} u_l^{k_l} ⊆ w_i

we obtain a maximum Σ_{(u_l → v_l)^{k_l} ∈ M} |u_l| k_l.
We investigate the computational complexity of this problem. We show that the decision version of the resource mapping problem in simple P systems is NP-complete. Without loss of generality, we consider the decision version of RMP where we have as input a single membrane of a simple P system with a resource allocator, and a number T which represents the number of objects we intend to rewrite; we ask whether there exists a multiset of rules which consumes at least that number T of objects. Formally, using the framework of Definition 2.4, we ask whether there exists M = {(u_j → v_j)^{k_j} | u_j → v_j ∈ R_i} such that Σ_{(u_j → v_j)^{k_j} ∈ M} u_j^{k_j} ⊆ w_i and Σ_{(u_j → v_j)^{k_j} ∈ M} |u_j| k_j ≥ T, where |u_j| denotes the multiplicity of the object u_j. We use a classical approach, showing first that RMP is in NP, and then that the knapsack problem can be polynomial-time reduced to RMP. For the latter, we consider a version of the knapsack problem KNAP where the weight of an item is equal to its profit and we can choose an item multiple times.

Definition 2.5. A decision problem X is a pair (I_X, Y_X) where I_X is the set of instances, and Y_X is the subset of positive instances. Given two decision problems A and B, we say that A is polynomial-time reducible to B, denoted by A ≤m B, if and only if there exists a function f : I_A → I_B
such that f can be computed in polynomial time (with respect to the size of its input) by a deterministic Turing machine, and a ∈ Y_A if and only if f(a) ∈ Y_B for all instances a ∈ I_A.

Theorem 2.6. The decision version of RMP is NP-complete.

Proof. In the following we denote by RMP the decision version of the resource mapping problem. To prove that RMP is NP-complete we show that:
• RMP ∈ NP;
• KNAP ≤m RMP.
To show that RMP ∈ NP, we design a nondeterministic Turing machine which solves the problem in polynomial time by either accepting (output YES) or rejecting (output NO) the input string. The machine has as input string an encoding of the concatenation of the following components: a membrane of a simple P system, the rules of this membrane, the available multiset of objects called the input multiset, and the input number T which represents the number of objects to rewrite. The machine constructs incrementally a multiset of rules as follows:
(1) it starts with an empty multiset;
(2) if it can apply some rules, it chooses and applies one rule nondeterministically and adds it to the multiset of applied rules;
(3) if it cannot apply any rule, it counts the number of rewritten objects (from the computed multiset of rules) and compares it to the input number T;
(4) if the new computed number is greater than or equal to T, then the machine accepts the input string (output YES), otherwise it rejects it (output NO).
The number of steps performed by the machine is polynomial with respect to the size of the input multiset of objects in the membrane. At each step we apply a rule, and so the input multiset decreases by at least one object. It follows that the number of performed steps is at most the cardinality of the input multiset.
We now show that KNAP ≤m RMP. To achieve this we define a function f which transforms an instance of KNAP into an instance of RMP, and show that f can be computed in polynomial time.
The input for f is the number of items n, the capacity c of the knapsack, the weights g_i of each item i, their profits p_i = g_i, and the profit value T to be obtained. The output of f consists of the membrane i, a set of rules R_i, the input multiset w_i, and the number T of rewritten objects. Let us consider G = {g1, …, gn} the set of item weights. Using the above notations, the transformation can be defined as follows:

(3)  i is a simple membrane; w_i = a^c, a ∈ O; R_i = {a^{g_i} → b | a, b ∈ O, g_i ∈ G, ∀i = 1,n}; T = T.
It follows from equations (3) that f can be computed in polynomial time. Using Definition 2.5 and the equations of (3), we show that for each instance a of KNAP, a ∈ Y_KNAP if and only if f(a) ∈ Y_RMP. For the first part of the implication we start with a ∈ Y_KNAP. This means that there exists a solution to this instance, namely a selection of items whose weight does not exceed c and whose profit is at least T. We denote it by K = {k1, …, kn}, where k_i represents the number of times item i has been included in the knapsack. Thus we have Σ_{i=1}^n k_i p_i = T′ ≥ T and Σ_{i=1}^n k_i g_i ≤ c. We construct M = {(a^{g_i} → b)^{k_i} | i = 1,n}. We can apply k_i times the rule a^{g_i} → b because we have inserted k_i times the item i in the knapsack. We define k_i = 0 whenever we did not apply a rule a^{g_i} → b, and so we can sum over i = 1,n. We now have

Σ_{(a^{g_i} → b)^{k_i} ∈ M} |a^{g_i}| k_i = Σ_{(a^{g_i} → b)^{k_i} ∈ M} g_i k_i = Σ_{i=1}^n g_i k_i = Σ_{i=1}^n p_i k_i = T′ ≥ T.
This implies that f(a) ∈ Y_RMP. For the second part of the implication we consider f(a) ∈ Y_RMP. This means that there exists a membrane i containing a multiset w_i of objects and a set R_i of rules that, applied in the maximum consuming way, consume at least T objects. We denote the multiset of applied rules by M; thus M = {(a^{g_i} → b)^{k_i} | k_i ∈ N} and Σ_{(a^{g_i} → b)^{k_i} ∈ M} |a^{g_i}| k_i ≥ T. It follows that

Σ_{(a^{g_i} → b)^{k_i} ∈ M} |a^{g_i}| k_i = Σ_{(a^{g_i} → b)^{k_i} ∈ M} g_i k_i = Σ_{i=1}^n g_i k_i = Σ_{i=1}^n p_i k_i ≥ T.
This implies that a ∈ YKNAP , and so the proof is completed.
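Under the notations of equations (3), the reduction f admits a direct sketch (the helper name is ours, not from the text); it merely rewrites a KNAP instance with p_i = g_i into a membrane content a^c and one rule a^{g_i} → b per item weight:

```python
# Sketch of the reduction f from equations (3): a KNAP instance with
# profits equal to weights becomes an RMP instance over a single membrane
# with input multiset w_i = a^c and rules a^{g_i} -> b.
def knap_to_rmp(c, g, T):
    w = "a" * c                            # the membrane content a^c
    rules = [("a" * gi, "b") for gi in g]  # rule a^{g_i} -> b for each weight
    return w, rules, T                     # the bound T is passed through unchanged

w, rules, T = knap_to_rmp(10, [3, 5, 6], 40)
print(len(w), [len(u) for u, _ in rules], T)  # 10 [3, 5, 6] 40
```

The transformation clearly runs in time polynomial in c and n, matching the claim in the proof.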
1.4. Complexity of Simple P Systems. We now extend the approach from [CG], where the authors suggest that a P system can evolve using an oracle which solves the resource allocation problem for a membrane of the system. We avoid the oracle by using a resource allocator which ensures a maximum consuming and nondeterministic evolution. The evolution of a simple P system is given by the application of a set of rules that consume the maximum number of objects; this depends on the resource allocator RA being able to solve an instance of the discrete knapsack problem and retrieve the solutions.
Each evolution step of a simple P system is composed of three stages: the first stage consists of the assignment of objects to rules according to the resource allocator, the second stage represents the distribution of the results obtained after applying the selected rules, and the final stage consists of the dissolution of certain membranes. We create an instance of the discrete knapsack problem based on the multiset of objects, then we solve this instance obtaining the corresponding rules. After applying the rules, the resulting objects are moved according to their tags here, out, and in_j. The membranes containing the special symbol δ are dissolved, and their contents are transferred to their parents (the δ symbols are not transferred). For the first stage of the process we present a function that transforms an instance of the resource allocation problem into a knapsack instance such that we can obtain a solution of the RMP instance by solving the transformed instance. Given a membrane of a simple P system, we define the capacity c of the knapsack, the number n of items, and the weight g_{ik} and profit p_{ik} of each item, as follows:

(4)  c = |w|;
     g_{ik} = k · h_i, where r_i = u_i^{h_i} → v_i ∈ R = {r_1, …, r_{|R|}}, u_i ∈ O, for k = 1,m_i, where m_i = max{j ∈ N | w′ ⊆ w ∧ w′ = j · u_i^{h_i}}, i = 1,|R|;
     n = |{g_{ik}}|;
     p_{ik} = g_{ik}.
The transformation f is defined by (4). In the knapsack problem, an item can be used only once; however, in membrane systems a rule can be applied several times. Thus for every rule we define a "class" of items such that each item represents a rule application. We denote by W such a set of items; thus we have |W| ≤ c · |R|, because a rule cannot be applied more than c times, and there exist |R| rules. Since we are interested in consuming as many objects as possible, the profit of an item is defined as the number of objects it consumes. The transformation f can be computed in polynomial time with respect to |w|.

In the first stage we transform an instance of the resource allocation problem into an instance of the knapsack problem by using the function f. Then we solve the newly created instance, and obtain the rules which can be applied in parallel, together with the multiplicity of each rule. We can express the computational complexity of each stage with respect to the input represented by the multiset of objects w of the membrane. For the first stage we can define a relation between the number of created items and the size of the input multiset. According to equations (4), each rule u_i → v_i can introduce a maximum of |w|/|u_i| corresponding items in the knapsack instance. Summing these quantities over all rules, we have that

n ≤ Σ_{i=1}^{|R|} |w|/|u_i| = |w| Σ_{i=1}^{|R|} 1/|u_i|.

Note that Σ_{i=1}^{|R|} 1/|u_i| is a constant associated to the membrane, because the rules of a membrane do not change in the process of evolution. We denote this constant by S, and so n ≤ |w| · S. Thus the complexity of this stage is O(n) = O(|w| · S).

For the second stage we use a pseudo-polynomial algorithm for the knapsack problem having a complexity of O(n · c), where n is the number of items in the knapsack instance, and c is the capacity of the knapsack. By using the estimated number of items expressed in the first stage and the fact that c = |w|, the complexity of the second stage is O(|w| · S · |w|) = O(|w|² · S).

The third stage consists of applying the rules according to the solution of the knapsack problem. It is worth noting that we can have different rules which give the same profit. The pseudo-polynomial algorithm for knapsack can retrieve the selected items in O(n) steps, and so the complexity of this step is O(|w| · S). During the backtracking process used for obtaining the selected items we nondeterministically choose a rule corresponding to each item. Thus we obtain a multiset of rules which corresponds to a maximum consuming evolution, and the evolution is nondeterministic because of the way the rules are chosen. Thus the complexity of an evolution step for a membrane of a simple P system is O(|w| · S) + O(|w|² · S) + O(|w| · S) = O(|w|² · S).

Starting from Example 2.3, we illustrate how to distribute a^10 as {(a^3 → c)^2, a^4 → b}. According to the equations (4), we obtain the knapsack instance with c = 10, n = 7, and the item weights and profits indicated in Table 3.

i    1  2  3  4  5  6  7
g_i  2  3  4  6  8  9  10
p_i  2  3  4  6  8  9  10

Table 3. The created items of the knapsack obtained by applying the transformation function

By applying the knapsack algorithm we get the results described in Table 4. Using the recurrence relation defined by equation (1), we obtain the items in the solution of the knapsack problem. At each step i, i = 1,n, we test whether the item n − i + 1 is included in the knapsack.
We start the backtracking process with the maximum allowed weight 10.

i                     7       6       5       4        3        2      1
X                     10      10      10      10       4        0      0
f_{i−1}(X)            10      10      10      9        3        0      0
f_{i−1}(X−g_i)+p_i    10      9       10      10       4        −∞     −∞
max                   f6(10)  f5(10)  f4(10)  f3(4)+6  f2(0)+4  f1(0)  f0(0)

Table 4. The backtracking process used for obtaining the selected items

To test if item 7
is included in the knapsack, we compute the maximum of f6(10) = 10 and f6(10 − 10) + 10 = 10. We (nondeterministically) choose f6(10), which corresponds to the fact that we do not choose item 7. Continuing with item 6, we choose between f5(10) = 10 and f5(10 − 9) + 9 = 9. According to this we choose f5(10) and do not include item 6 in the knapsack. At the next step we compare f4(10) = 10 and f4(10 − 8) + 8 = 10. Again we choose nondeterministically the value f4(10). At step 4 we compare f3(10) = 9 and f3(10 − 6) + 6 = 10 and choose the latter. At this step, the remaining weight becomes 10 − 6 = 4 because we have chosen to include item 4 with weight and profit 6. To decide if item 3 is included, we compare f2(4) = 3 and f2(4 − 4) + 4 = 4. We choose f2(0) + 4, which means that item 3 is included in the knapsack. The remaining weight becomes 4 − 4 = 0. At the following step we find the maximum between f1(0) = 0 and f1(0 − 3) + 3 = −∞; we do not include item 2 in the knapsack. In the last step we determine if item 1 is included in the knapsack by finding the maximum between f0(0) = 0 and f0(0 − 2) + 2 = −∞. According to this we do not include item 1 in the knapsack. This process is illustrated in Table 4, where X represents the remaining weight in the knapsack, f_{i−1}(X) and f_{i−1}(X − g_i) + p_i represent the alternatives between including the current item i or not, and max represents the chosen alternative. The recurrence is also illustrated in Table 5, where the value chosen at step i is highlighted with a box subscripted by the step number. The solution is given by items 3 and 4, which have total weight 10 and total profit 10. The multisets of rules associated with each item can be determined by using equations (4). In this way, item 3 is defined by the following possible multisets of rules: {(a^2 → d)^2} and {a^4 → b}. Similarly, item 4 corresponds to the following multisets of rules: {(a^2 → d)^3}, {a^2 → d, a^4 → b} and {(a^3 → c)^2}.
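The recurrence and backtracking described above can be sketched as follows; this is a minimal Python illustration of the standard pseudo-polynomial knapsack algorithm on the instance of Table 3 (function and variable names are ours):

```python
# Pseudo-polynomial knapsack DP on the instance of Table 3
# (weights equal profits, i.e. the subset-sum variant).
NEG = float("-inf")

def knapsack_table(g, p, c):
    # f[i][X] = best profit using the first i items within capacity X
    f = [[0] * (c + 1)]
    for i in range(1, len(g) + 1):
        f.append([max(f[i - 1][X],
                      f[i - 1][X - g[i - 1]] + p[i - 1] if X >= g[i - 1] else NEG)
                  for X in range(c + 1)])
    return f

def backtrack(f, g, p, c):
    # Recover one optimal item set by walking the table backwards;
    # ties are resolved towards f[i-1][X], as in Table 4 (steps 7, 6, 5).
    items, X = [], c
    for i in range(len(g), 0, -1):
        with_i = f[i - 1][X - g[i - 1]] + p[i - 1] if X >= g[i - 1] else NEG
        if with_i > f[i - 1][X]:
            items.append(i)
            X -= g[i - 1]
    return items

g = p = [2, 3, 4, 6, 8, 9, 10]
f = knapsack_table(g, p, 10)
print(f[7][10])                # -> 10
print(backtrack(f, g, p, 10))  # -> [4, 3], i.e. items 3 and 4
```

The tie-breaking rule in `backtrack` fixes one of the nondeterministic choices mentioned in the text; resolving a tie towards inclusion instead would yield a different, equally optimal item set.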
The algorithm chooses nondeterministically a single multiset for each item of the solution. Let us assume that the algorithm outputs the following multiset: {(a^3 → c)^2, a^4 → b}.
  X    0  1  2  3  4  5  6  7  8  9  10
  f0   0  0  0  0  0  0  0  0  0  0  0
  f1   0  0  2  2  2  2  2  2  2  2  2
  f2   0  0  2  3  3  5  5  5  5  5  5
  f3   0  0  2  3  4  5  6  7  7  9  9
  f4   0  0  2  3  4  5  6  7  8  9  10
  f5   0  0  2  3  4  5  6  7  8  9  10
  f6   0  0  2  3  4  5  6  7  8  9  10
  f7   0  0  2  3  4  5  6  7  8  9  10
  Table 5. Finding the solution to the knapsack instance of Table 3
The recurrence described in Table 5 and the nondeterministic choice of multisets of rules associated with the selected items determine that we can apply the rule a^3 → c twice and the rule a^4 → b once in order to consume all the resources a^10.

2. Complexity of Evolution in Maximum Cooperative P Systems

We introduce a variant of P systems called maximum cooperative P systems; it consists of transition P systems with cooperative rules that evolve at each step by consuming the maximum number of objects. The problem of distributing objects to rules in order to achieve a maximum consuming evolution is studied by introducing the resource mapping problem. The decision version of this optimization problem is proved to be NP-complete. We describe a new simulation technique for the evolution of maximum cooperative P systems based on integer linear programming, and illustrate the evolution by an example.

2.1. Integer Linear Programming. Many problems of both practical and theoretical importance deal with finding the “best” solution given a set of constraints. One such type of problem, called the integer linear programming (ILP) problem, involves the optimization of a linear objective function subject to linear equality and inequality constraints:

(5)   Find x ∈ Z^n to maximize c^T x subject to Ax ≤ b,

where A is an m × n matrix with elements a_ij ∈ Z, m, n ∈ N, b ∈ Z^m, and c ∈ Z^n.
ILP is a version of the linear programming (LP) problem. Using the previous definition, an LP is defined exactly as an ILP, except that the constraint x ∈ Z^n is replaced with x ∈ Q^n (i.e., the solution is composed of rational numbers). Linear programming can be applied to various fields of study, from economics to engineering. Most extensively it is used in modelling diverse types
of problems in planning, routing, scheduling, and assignment. The decision version of ILP is an NP-complete problem [44, 86]. We use the same definition for the size of an ILP instance as in [86]. The size L of an ILP instance can be expressed as L = mn + log |P|, where P is the product of the nonzero entries of A, b, and c. Algorithms for solving an ILP instance are divided into two classes which overlap to some extent: the cutting-plane algorithms, and the enumerative ones. The latter are based on intelligent enumeration of possible solutions; branch-and-bound and branch-and-cut are two algorithms from this class. The cutting-plane algorithms incrementally add constraints that do not exclude integer feasible points until we obtain an integer solution. There are several algorithms used for solving LP instances, notably: the simplex algorithm [35] introduced by Dantzig, which can take exponential time, the ellipsoid method [60] introduced by Khachiyan with a time complexity of O(n^4 L), and the interior point method [56] introduced by Karmarkar with a time complexity of O(n^3.5 L), refined by Anstreicher in [5] with a time complexity of O((n^3 / ln n) L). More information on integer linear programming can be found in [86]. Here we focus only on positive instances of ILP (i.e., b ∈ N^m, c ∈ N^n, x ∈ N^n, and a_ij ∈ N for all i = 1, m and j = 1, n). We do this because the ILP solution corresponds to the number of times each membrane rule is applied, and a rule cannot be applied a negative number of times.

2.2. Maximum Cooperative P Systems. We represent multisets as strings of elements over their support alphabet together with their multiplicities (e.g., u = a^3 b c^2). The union v + w of two multisets over a set O is given by the sum of multiplicities for each element of O. u^k is an abbreviation for Σ_{i=1}^{k} u, where u is a multiset and k ∈ N (e.g., u^3 = a^9 b^3 c^6).
A membrane structure is a hierarchical arrangement of membranes, and it can be represented by a rooted tree. For a membrane structure composed of n membranes, each membrane is labelled with a unique number from {1, . . . , n}. The applicability of a rule does not depend only on its left-hand side, because we can have rules where the right-hand side contains a symbol with the tag in_j, and there is no child membrane j. We select the applicable rules of a membrane i of a membrane structure µ using filter(µ, i), defined by
  {u_ij → v_ij | u_ij ⊆ w_i, and for all (a, in_k) ∈ v_ij there is a child k of membrane i}.
From now on, we refer to the applicable rules for a membrane i by R_i = filter(µ, i). A configuration of the system is given by the membrane structure and the multisets contained in each membrane. The initial configuration is the (n + 1)-tuple (µ, 1 : w1, . . . , n : wn), where i : w_i represents the fact that membrane i contains the multiset w_i. The membrane structure associated
to any configuration must have the same root, the skin membrane. We adopt the convention that the skin membrane is labelled by 1. Having the possibility to dissolve a membrane, we can obtain a new configuration which has only some of the initial membranes. A configuration of Π is defined as a sequence (µ′, 1 : w1, i1 : w′_{i1}, . . . , ik : w′_{ik}), where µ′ is a membrane structure obtained from µ by dissolving all the membranes different from i1, . . . , ik, and w′_{ij} are strings over O, 1 ≤ j ≤ k, {i1, . . . , ik} ⊆ {1, . . . , n}.

Definition 2.7. A maximum cooperative P system is a maximum consuming transition P system. In such a system, cooperative rules are applied in parallel such that they consume the maximum number of available objects. Let (µ, 1 : w1, l1 : w_{l1}, . . . , lk : w_{lk}) be an arbitrary configuration of the system, i a membrane from µ, and H^i the multiset of rules chosen to be applied in membrane i, meaning that

  H^i = {(u_ij → v_ij)^{k_ij} | u_ij → v_ij ∈ R_i ∧ k_ij ∈ N, j = 1, |R_i|} and Σ_{j=1}^{|R_i|} u_ij^{k_ij} ⊆ w_i.

Then for any other multiset H′ of applicable rules such that

  H′ = {(u_il → v_il)^{k′_il} | u_il → v_il ∈ R_i ∧ k′_il ∈ N, l = 1, |R_i|} and Σ_{l=1}^{|R_i|} u_il^{k′_il} ⊆ w_i,

we have

  Σ_{l=1}^{|R_i|} |u_il| · k′_il ≤ Σ_{j=1}^{|R_i|} |u_ij| · k_ij,

where |u_ij| denotes the length of u_ij.
Example 2.8. To clarify the previous definition, we give an example. Suppose that we have a membrane which contains b^10 c d^6 f, and the rules b → ce, cd → be, cf → fδ, and df → bc. According to Definition 2.7, the resources can be distributed in multiple ways. A possible solution is b^10 c d^6 f allocated as {(b → ce)^10, cd → be, df → bc}, which consumes 14 objects. This example illustrates the fact that maximizing the number of consumed objects is different from applying the rules in a maximally parallel way, because we could allocate b^10 c d^6 f as {(b → ce)^10, cf → fδ}, consuming only 12 objects and leaving d^6 unconsumed.
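This allocation can be checked by exhaustive search. The following brute-force sketch is our own illustration (not the technique used later in the chapter); it enumerates all applicable rule multisets over the membrane contents and keeps one consuming the most objects (δ is written as 'D'):

```python
from collections import Counter
from itertools import product

# Membrane contents and rules of Example 2.8.
w = Counter("bbbbbbbbbbcddddddf")          # b^10 c d^6 f
rules = [("b", "ce"), ("cd", "be"), ("cf", "fD"), ("df", "bc")]

def applicable(mults):
    # A rule multiset is applicable if the summed left-hand sides
    # fit inside the available objects.
    need = Counter()
    for (lhs, _), k in zip(rules, mults):
        for obj in lhs:
            need[obj] += k
    return all(need[o] <= w[o] for o in need)

# Rule i can be applied at most |w| / |lhs_i| times.
bounds = [sum(w.values()) // len(lhs) + 1 for lhs, _ in rules]
best = max(
    (sum(k * len(lhs) for (lhs, _), k in zip(rules, m)), m)
    for m in product(*(range(b) for b in bounds))
    if applicable(m)
)
print(best)  # -> (14, (10, 1, 0, 1)): b->ce ten times, cd->be and df->bc once
```

The search space here is tiny; the point of Sections 2.3 and 2.4 is precisely that this optimization is NP-complete in general, so exhaustive search does not scale.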
2.3. Parallel Application of Rules by Allocation of Resources. The problem of distributing objects to rules in order to achieve a maximum consuming evolution is studied here from a computational complexity point of view. Using Definition 2.7, a maximum consuming evolution step of a maximum cooperative P system corresponds to a multiset of rules which rewrites the maximum number of objects. A resource allocator (RA for short) is a device capable of distributing objects to rules. The purpose of the resource allocator is to nondeterministically distribute multisets of objects to the rules of a membrane such that the evolution is then done in a maximum consuming way. A resource allocator can be defined for each membrane of a maximum cooperative P system. A resource allocator for a membrane can be formally defined as a mapping RA : O* → (O*)^{n+1}, where O is the alphabet of objects, and R = {u_j → v_j | j = 1, n} is the set of applicable
rules contained in the membrane. Considering w ∈ O* as the membrane contents, and w′ the multiset of objects that are not consumed, we have

(6)   RA(w) = {(w1, w2, . . . , wn, w′) | u_j ⊈ w′ ∧ w_j = u_j^{k_j}, k_j ∈ N, j = 1, n, and Σ_{j=1}^{n} w_j + w′ = w}.
To study the computational complexity of the evolution in maximum cooperative P systems, we define the resource mapping problem (RMP for short): given a membrane of a configuration in a maximum cooperative P system, construct a multiset which is maximum consuming according to Definition 2.7. RMP is used to compute the distribution of objects to rules in order to achieve a maximum consuming evolution.

Definition 2.9. An instance of RMP is given by (i, w_i, R_i) where:
(1) i is a membrane of µ, where µ is a membrane structure,
(2) w_i is the multiset of objects in membrane i,
(3) R_i is the set of applicable rules within membrane i,
such that for

  H^i = {(u_il → v_il)^{k_il} | u_il → v_il ∈ R_i, k_il ∈ N, l = 1, |R_i|} and Σ_{l=1}^{|R_i|} u_il^{k_il} ⊆ w_i

we obtain a maximum value for Σ_{l=1}^{|R_i|} |u_il| · k_il.
The resource mapping problem (RMP) is similar to that defined and studied for simple P systems; the only difference consists in the form of the membrane rules. More precisely, in simple P systems the left-hand side of a rule consists only of a single object (e.g., a^2 → b^3 c), whereas here the left-hand side of a rule can contain several objects (e.g., ab → ac^2 d). We denote by RMPs the version associated with simple P systems, and by RMPc the version associated with the maximum cooperative P systems presented here. We investigate the computational complexity of RMPc. We show that the decision version of the resource mapping problem in maximum cooperative P systems is NP-complete. Without loss of generality, we consider the decision version of RMPc where we have as input a single membrane together with a number T representing the number of objects we intend to rewrite, and ask whether there exists a multiset of rules which consumes at least T objects. Formally, using the framework of Definition 2.9, we ask whether there exists a multiset of rules
  H^i = {(u_ij → v_ij)^{k_ij} | u_ij → v_ij ∈ R_i ∧ k_ij ∈ N, j = 1, |R_i|}

such that

  Σ_{j=1}^{|R_i|} u_ij^{k_ij} ⊆ w_i and Σ_{j=1}^{|R_i|} |u_ij| · k_ij ≥ T.
Theorem 2.10. The decision version of RMPc is NP-complete.

Proof. The proof consists of two parts: first we show that RMPc is in NP, and then we show that RMPc is NP-hard. To show that RMPc ∈ NP, we design a nondeterministic Turing machine which solves the problem in polynomial time. The machine has as input string an encoding of the concatenation of the following components: a membrane of a maximum cooperative P system, the rules of this membrane, the available multiset of objects called the input multiset, and the input number T which represents the number of objects to rewrite. The machine constructs incrementally a multiset of rules using the following steps:
(1) start with an empty multiset of applicable rules;
(2) while some rules can be applied, nondeterministically choose and apply one, and then add it to the multiset of applied rules;
(3) if no rule can be applied, count the number of rewritten objects (from the computed multiset of rules), and compare it to the input number T;
(4) if the number of rewritten objects is greater than or equal to T, then accept the input string and output YES, otherwise reject it and output NO.
It is worth noting that during step 2 we only consume objects that correspond to the left-hand side of the chosen rules. The machine always halts because the number of rule applications is finite. This follows from the fact that at each iteration of step 2, the input multiset of objects available for rule application decreases by at least one object. Thus, after a finite number of nondeterministic choices performed in step 2, the machine evolves to steps 3 and 4, outputting the answer. It follows that the number of steps performed is polynomial with respect to the input size, namely that of the available objects multiset and the number of rules in the membrane.
To show that RMPc is NP-hard we use the fact that it contains RMPs as a subproblem. In [CR1] we show that RMPs is NP-complete by reducing the knapsack problem to it. □
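The NP-membership argument rests on the fact that a candidate multiset of rules can be verified in polynomial time. A sketch of such a verifier, with our own encoding of rules as (left, right) string pairs, multiplicities as a list, and δ written as 'D':

```python
from collections import Counter

def verify(w, rules, mults, T):
    # Polynomial-time certificate check: the chosen rule multiset must
    # fit inside w and rewrite at least T objects.
    avail = Counter(w)
    need = Counter()
    for (lhs, _), k in zip(rules, mults):
        for obj in lhs:
            need[obj] += k
    fits = all(need[o] <= avail[o] for o in need)
    consumed = sum(k * len(lhs) for (lhs, _), k in zip(rules, mults))
    return fits and consumed >= T

rules = [("b", "ce"), ("cd", "be"), ("cf", "fD"), ("df", "bc")]
print(verify("bbbbbbbbbbcddddddf", rules, [10, 1, 0, 1], 14))  # True
print(verify("bbbbbbbbbbcddddddf", rules, [10, 1, 1, 0], 14))  # False: c overused
```

The check is linear in the total size of the rule left-hand sides and the membrane contents, matching the polynomial bound claimed in the proof.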
It is easy to see that every instance of RMPs is an instance of RMPc , because the rules from simple P systems are included in those of maximum cooperative P systems. This can be viewed as a reduction where the identity function transforms an instance of RMPs into an RMPc instance. Using this in conjunction with the fact that RMPs is NP-complete proves that RMPc is NP-hard. We outline the proof that RMPs is NP-complete. First we show that RMPs ∈ NP by constructing a nondeterministic Turing machine, and then
that RMPs is NP-hard by polynomially reducing a version of the knapsack problem (KNAP) to it. To achieve this we define a function f which transforms an instance of KNAP into an instance of RMPs, and show that f can be computed in polynomial time. The input for f is the number of items n, the capacity of the knapsack c, the weights g_i of each item i, their profits p_i = g_i, and the profit value T to be obtained. This particular version of knapsack is known as subset sum, and it is also NP-complete. The output of f consists of a simple P system with a single membrane i having the set R of rules, the input multiset w, and the number T of objects to be consumed (by rewriting). Let us consider G = {g1, . . . , gn} the set of item weights. To avoid any confusion, we denote by T_k the profit value to be obtained in the KNAP instance, and by T_r the number of objects to be consumed in the RMPs instance. Using the above notations, the transformation has as output the system consisting of:

(7)   w = a^c, a ∈ O,   R = {a^{g_i} → b | a, b ∈ O, g_i ∈ G, for all i = 1, n},   T_r = T_k.
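The transformation (7) can be sketched as follows (our own encoding: a rule a^{g_i} → b is represented as a pair of strings, and the multiset w as an object-to-multiplicity map):

```python
def knap_to_rmps(n, c, G, Tk):
    # Build the RMPs instance of (7): multiset w = a^c, one simple rule
    # a^{g_i} -> b per item weight, and the same consumption target.
    w = {"a": c}                        # w = a^c
    R = [("a" * g, "b") for g in G]     # rule a^{g_i} -> b for each g_i in G
    Tr = Tk
    return w, R, Tr

# The subset-sum instance corresponding to Table 3:
w, R, Tr = knap_to_rmps(7, 10, [2, 3, 4, 6, 8, 9, 10], 10)
print(w, Tr)   # -> {'a': 10} 10
print(R[0])    # -> ('aa', 'b')
```

Writing g_i in unary keeps the construction simple; a polynomial bound in the strict bit-size sense would encode the multiplicities as numbers instead.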
From these equations it follows that f can be computed in polynomial time. We then show that for each instance a of KNAP, a ∈ Y_KNAP if and only if f(a) ∈ Y_RMP, where Y_A denotes the set of positive instances for a decision problem A.

2.4. Evolution Complexity of Maximum Cooperative P Systems. The evolution of a maximum cooperative P system involves the resource mapping problem, an optimization problem proved to be NP-complete by Theorem 2.10. Integer linear programming is used to simulate the evolution of a maximum cooperative P system, and then the computational complexity of the evolution process is presented. Each evolution step is composed of three stages: the assignment of objects to rules according to the resource allocator, the distribution of the results obtained after applying the selected rules, and the dissolution of certain membranes. This can be described by using integer linear programming, because it can express the dependencies between the objects from the left-hand side of the rules and those available. Thus, each evolution step is simulated as follows. We create an instance of ILP based on the multiset of available objects, and then we solve it, obtaining the corresponding rules. After applying the rules, the resulting objects are moved according to their tags (here, out, and in_j). The membranes containing the special symbol δ are dissolved, and their contents are transferred to their parents (the δ symbols are not transferred). For the first stage of the process we present a function that transforms an instance of the resource mapping problem into an integer linear programming instance such that a solution of the RMPc instance is obtained by solving the ILP instance.
Given a membrane of a maximum cooperative P system with the available object multiset w and the rule set R, we construct a corresponding ILP instance. We consider the objects alphabet O = {o1, o2, . . .}, and w(o) represents the multiplicity of object o in a multiset w. For the ILP instance, we define the number n of variables, the number m of constraints, an m × n matrix A of positive coefficients together with a vector b ∈ Z^m representing the constraints, and the coefficients c ∈ Z^n of the objective function to maximize:

(8)
  n = |R|,
  m = |{o ∈ w̄ | w̄(o) > 0}|, where w̄ = (Σ_{u_j → v_j ∈ R} u_j) ∩ w,
  c ∈ Z^n, c_j = |u_j| for u_j → v_j ∈ R, for all j = 1, n,
  b ∈ Z^m, b_i = w(o_i) for all i = 1, m,
  A = (a_ij), a_ij = u_j(o_i), where R = {u_j → v_j | j = 1, n} ∧ o_i ∈ O, ∀ i = 1, m.

The transformation f is defined by (8). Each variable from the ILP instance corresponds to a certain rule (i.e., x_j corresponds to u_j → v_j ∈ R). The solution x ∈ Z^n indicates the number of times each rule is applied. The constraints define how many available objects can be consumed. A constraint is defined whenever a rule u_j → v_j involves the consumption of an available object o (i.e., u_j(o) > 0 ∧ w(o) > 0). Hence the number of constraints is equal to the number of distinct objects that can be consumed by applicable rules. This number is equal to the number of distinct objects of the multiset obtained from the intersection of the sum multiset of the rules' left-hand sides with the multiset of available objects. The constraint associated with object o_i is of the form Σ_{j=1}^{n} a_ij x_j ≤ b_i for all i = 1, m, indicating the fact that we cannot consume more objects o_i than are available. The transformation f can be computed in polynomial time with respect to |w| and |R|. The number n of variables is computed in O(|R|), the number m of constraints is computed in O(Σ_{u_j → v_j ∈ R} |u_j|) + O(|w|) = O(|w|), c is
computed in O(|R|), b is computed in O(m), and A is computed in O(nm) = O(|R| · |w|). Thus f can be computed in O(|R|) + O(|w|) + O(|R| · |w|). According to [86], the size L of an ILP instance is nm+log |P |. Knowing that n = |R|, m = O(|w|), and |P | = O((nm)2 ), we have L = O(|R| · |w|) + O(log(|R| · |w|)).
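The transformation f of (8) can be sketched with Python Counters standing for multisets (our own encoding, with δ written as 'D'); on the membrane of Example 2.8 it reproduces the instance (9) below:

```python
from collections import Counter

def rmp_to_ilp(w, rules):
    # Build (n, m, c, b, A) from the available objects w and the rules,
    # following (8): one variable per rule, one constraint per
    # distinct consumable object.
    lhs_sum = Counter()
    for lhs, _ in rules:
        lhs_sum += Counter(lhs)
    objs = sorted(o for o in lhs_sum if w[o] > 0)   # consumable objects
    n, m = len(rules), len(objs)
    c = [len(lhs) for lhs, _ in rules]              # c_j = |u_j|
    b = [w[o] for o in objs]                        # b_i = w(o_i)
    A = [[Counter(lhs)[o] for lhs, _ in rules]      # a_ij = u_j(o_i)
         for o in objs]
    return n, m, c, b, A

w = Counter("bbbbbbbbbbcddddddf")                   # b^10 c d^6 f
rules = [("b", "ce"), ("cd", "be"), ("cf", "fD"), ("df", "bc")]
n, m, c, b, A = rmp_to_ilp(w, rules)
print(n, m, c, b)   # -> 4 4 [1, 2, 2, 2] [10, 1, 6, 1]
print(A)            # rows correspond to the objects b, c, d, f
```

Solving the resulting instance with any ILP solver yields the rule multiplicities; the construction itself is the polynomial part analysed in the text.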
In the first stage, an instance of the resource mapping problem is transformed into an instance of the integer linear programming problem by using the function f . Thus the complexity of this stage is the complexity of f ,
namely O(|R|) + O(|w|) + O(|R| · |w|). The ILP instance is solved, obtaining the rules and their multiplicities, which can be applied in parallel. The complexity of the second stage is the complexity of the algorithm used to solve the ILP instance. To this date, all known algorithms for solving ILP are exponential in the size of the input; branch-and-bound and cutting plane are two such algorithms. An upper bound for the solution of ILP is presented in [86]; if an instance of ILP has a finite optimum, then it has an optimal solution x such that |x_j| ≤ n^3 [(m + 2) a_3]^{4m+12} = M, where a_3 = max({a_1, a_2} ∪ {|c_j| | j = 1, n}), a_1 = max_{i,j} {|a_ij|}, and a_2 = max_i {|b_i|}, for all i = 1, m, j = 1, n. Thus the logarithm of the solution size is polynomial in the size of the input:

  log M = 3 log n + (4m + 12)[log(m + 2) + log a_3] = O(L^2) = O((|R| · |w| + log(|R| · |w|))^2) = O((|R| · |w|)^2).
The third stage consists of applying the rules according to the solution of the ILP instance. The solution to ILP is the vector x ∈ Z^n, where x_j represents the number of times we apply the rule u_j → v_j ∈ R, for all j = 1, n. It follows that the complexity of this step is O(|R|). The complexity of an evolution step for a membrane of a maximum cooperative P system is the sum of the complexities of the three stages.
We illustrate the distribution of the multiset b^10 c d^6 f to the rules b → ce, cd → be, cf → fδ, df → bc as in Example 2.8. According to (8), we obtain the following ILP instance:

(9)
  n = 4, m = 4,
  c = (1, 2, 2, 2),
  b = (10, 1, 6, 1),
      | 1 0 0 0 |
  A = | 0 1 1 0 |
      | 0 1 0 1 |
      | 0 0 1 1 |
This instance can be expressed as maximizing x1 + 2x2 + 2x3 + 2x4, subject to the following constraints:
  x1 ≤ 10
  x2 + x3 ≤ 1
  x2 + x4 ≤ 6
  x3 + x4 ≤ 1
  x1, x2, x3, x4 ∈ N
The constraints express the fact that we can consume only the available objects. For example, the constraint x2 + x4 ≤ 6 means that we cannot consume more than 6 objects d. x2 and x4 are used in this constraint because
they represent the only rules consuming the object d (x2 corresponds to cd → be, and x4 corresponds to df → bc). The coefficient c_j associated with x_j represents the number of objects we rewrite by applying the rule that corresponds to x_j. Thus for rule cd → be we have c2 = 2 because we rewrite two objects. By applying the algorithm for ILP we obtain the solution x = (10, 1, 0, 1). This means that we apply b → ce ten times, cd → be once, and df → bc once, for a total of c^T x = 10 · 1 + 1 · 2 + 0 · 2 + 1 · 2 = 14 consumed objects.

3. Evolving by Maximizing the Number of Rules

We also present the complexity of finding a multiset of rules in a P system in such a way as to have a maximal number of rules applied. It is proved that the decision version of this problem is NP-complete. We study a number of subproblems obtained by considering that a rule can be applied at most once, and by considering the number of objects in the alphabet of the membrane as being fixed. When considering P systems with simple rules, the corresponding decision problem is in P. When considering P systems having only two types of objects, and P systems in which a rule is applied at most once, their corresponding decision problems are NP-complete. We compare these results with those obtained for maxO evolution.
The most investigated way of using the rules in a P system is maximal parallelism: in each membrane a multiset of rules is chosen to be applied to the objects from that membrane; the multiset is maximal in the sense of inclusion, i.e., no further rule can be added such that the enlarged multiset is still applicable. We use “maxP” to refer to this evolution strategy. Another natural idea is to apply the rules in such a way as to have a maximal number of objects consumed in each membrane; this manner of evolution is denoted by “maxO”.
This strategy was explicitly considered in [CR1, CR2], where it is proved that the problem of finding a multiset of rules which consumes a maximal number of objects is NP-complete. A third idea is to apply the rules in such a way as to have a maximal number of rules applied. We denote this type of evolution by “maxR”. Note that any evolution of either type maxR or type maxO is also of type maxP. Maximizing the number of objects or the number of rules can be related to the idea of energy for controlling the evolutions of P systems. The computing power of these strategies of applying a multiset of rules in membranes is studied in [CMP]. Specifically, P systems having multiset rewriting rules (with cooperative rules), symport/antiport rules, and active membranes are considered. The universality of the system is proved for any combination of type of system and type of evolution. Two variants of membrane systems called simple P systems and maximum cooperative P systems were considered in the two previous sections. These systems evolve at each step by consuming the maximum number of objects.
Here we study the complexity of finding a multiset of rules which evolves using the maxR strategy. We study a number of subproblems obtained by considering the number of objects in the alphabet of the membrane as being fixed, and by considering that a rule can be applied at most once. We prove that the decision version of this problem is NP-complete. However, in contrast to the results for the maxO strategy, the problem for P systems with simple rules is in P.

3.1. maxR Complexity. We represent multisets as strings of elements over their support alphabet together with their multiplicities (for example, w = a^2 b^5 c is a multiset over {a, b, c, d}). The union v + w of two multisets over a set O is given by the sum of multiplicities for each element of O. We define w(a) ∈ N to be the multiplicity of a in w. We say that w ≤ w′ if w(a) ≤ w′(a) for each element a of the multiset w. In this case we define w′ − w to be the multiset obtained by subtracting the multiplicity in w of an element from its multiplicity in w′. We use the notation i = 1, n to denote i ∈ {1, . . . , n}. For a rule r = u → v we use the notations lhs(r) = u and rhs(r) = v. These notations are extended naturally to multisets of rules: given a multiset of rules R, the left hand side of the multiset lhs(R) is obtained by adding the left hand sides of the rules in the multiset, considered with their multiplicities. A configuration of the system is given by the membrane structure and the multisets contained in each membrane. We define the three evolution strategies as follows:

Definition 2.11. For i = 1, n, a multiset R of rules over R_i is applicable (in membrane i) with respect to the multiset w_i if lhs(R) ≤ w_i and for each message (a, in_j) present in rhs(R) we have that j is one of the children of membrane i.
A multiset R of rules over R_i which is applicable with respect to the multiset w_i is called:
• maxP-applicable with respect to w_i if there is no rule r in R_i such that R + r is applicable with respect to w_i;
• maxO-applicable with respect to w_i if for any other multiset R′ of rules which is applicable with respect to w_i we have that Σ_{a∈O} lhs(R)(a) ≥ Σ_{a∈O} lhs(R′)(a);
• maxR-applicable with respect to w_i if for any other multiset R′ of rules which is applicable with respect to w_i we have that Σ_{r∈R_i} R(r) ≥ Σ_{r∈R_i} R′(r).
In other words, when choosing the maxP evolution strategy we only apply multisets of rules which are maximal with respect to inclusion; when choosing maxO we only apply multisets of rules which are maximal with
respect to the number of objects (considered with their multiplicities) in the left hand side of the multiset; when choosing maxR we only apply multisets of rules which are maximal with respect to the number of rules in the multiset (considered with their multiplicities). Note that any multiset of rules which is either maxR or maxO-applicable is also maxP-applicable. P systems generally employ the maxP evolution strategy; however, maxO and maxR represent convincing alternatives.
We denote by PO and PR the problems of finding a maxO or maxR-applicable multiset of rules, with respect to a given multiset of objects w. We can consider similar problems for the entire system, but they are solved by splitting the problems into smaller ones, one for each membrane. Thus for our purposes we can just consider the systems containing only one membrane, i.e., the degree of the P systems is n = 1. In other words, all multisets of rules we consider from now on are over a set of rules R. We use the following notations:
• m is the cardinality of the alphabet O, and we consider the objects to be denoted by o1, . . . , om;
• d is the number of rules associated to the membrane, and the rules are denoted by r1, . . . , rd;
• C_a is the multiplicity of o_a in the multiset w which is in the membrane;
• k_{i,a} is the multiplicity of o_a in the left hand side of the rule r_i.
The problem PO can be described as an integer linear programming problem. Given the positive integers m, d, k_{i,a}, C_a for i = 1, d and a = 1, m, find positive integers x_i such that
• Σ_{i=1,d} (Σ_{a=1,m} k_{i,a}) x_i is maximal;
• Σ_{i=1,d} x_i · k_{i,a} ≤ C_a for all a = 1, m.
The decision version of this problem was shown to be NP-complete in [CR1, CR2], and the proofs were based on the knapsack problem and integer linear programming.
The problem PR can be described as follows. Given the positive integers m, d, k_{i,a}, C_a for i = 1, d and a = 1, m, find positive integers x_i such that
• Σ_{i=1,d} x_i is maximal;
• Σ_{i=1,d} x_i · k_{i,a} ≤ C_a for all a = 1, m.
The decision version of PR is denoted by DPR: given positive integers m, d, t, k_{i,a} and C_a, decide whether there exist positive integers x_i such that
• Σ_{i=1,d} x_i ≥ t;
• Σ_{i=1,d} x_i · k_{i,a} ≤ C_a for all a = 1, m.
The length of this instance of the problem can be considered to be m + d + max_{a,i} {log C_a, log k_{i,a}}.

Proposition 2.12. DPR is NP-complete.
Proof. First, we prove that DPR is in NP. To show this we construct a Turing machine that computes the result in nondeterministic polynomial time by either accepting (output YES) or rejecting (output NO) the input string. The machine operates as follows:
(1) nondeterministically assign values for x_i, i = 1, d;
(2) if the assigned values verify the constraints,
(3) and Σ_{i=1,d} x_i ≥ t, then output YES;
(4) in any other case output NO.
It is easy to see that the number of steps performed by the machine is polynomial with respect to the input size. Thus DPR is in NP.
Secondly, we construct a polynomial-time reduction from 3CNF-SAT to DPR. The 3CNF-SAT problem asks whether a formula φ given in conjunctive normal form with three literals per clause is satisfiable, i.e., whether there exists a variable assignment which makes the formula true [44]. Consider a formula φ with variables x1, . . . , xr and clauses c1, . . . , cs. We describe a corresponding instance of DPR:
• d = 2r, m = r + s, t = r;
• for each variable x_i of φ we consider two variables y_i and z_i together with an inequality y_i + z_i ≤ 1 in the instance of DPR;
• for each clause c_a we consider the inequality
  Σ_{i=1,r} q_{i,a} y_i + Σ_{i=1,r} l_{i,a} z_i ≤ 2
such that:
– q_{i,a} = 0, l_{i,a} = 1 if the literal x_i appears in c_a;
– q_{i,a} = 1, l_{i,a} = 0 if the literal ¬x_i appears in c_a;
– q_{i,a} = l_{i,a} = 0 if neither x_i nor ¬x_i appears in c_a.
Since we consider t = r, the first inequality in this instance of DPR becomes Σ_{i=1,r} y_i + z_i ≥ r. This can be computed in polynomial time with respect to the size of the input. The idea behind the reduction is to set x_i = 1 if and only if y_i = 1, z_i = 0, and x_i = 0 if and only if y_i = 0, z_i = 1.
For example, consider the formula φ = c1 ∧ c2 ∧ c3 ∧ c4 with c1 = x1 ∨ ¬x2 ∨ x3, c2 = ¬x1 ∨ ¬x2 ∨ ¬x3, c3 = x1 ∨ ¬x2 ∨ ¬x3 and c4 = ¬x1 ∨ x2 ∨ x3. The corresponding instance of DPR is: find positive integers y_i, z_i (i = 1, 3) such that Σ_{i=1,3} y_i + z_i ≥ 3, y_i + z_i ≤ 1 and:
  z1 + y2 + z3 ≤ 2
  y1 + y2 + y3 ≤ 2
  z1 + y2 + y3 ≤ 2
  y1 + z2 + z3 ≤ 2
We notice that y_i + z_i = 1 and that a solution is y1 = 0, y2 = 0, z3 = 0, together with the corresponding values for z1, z2, y3. This means that we consider the assignment x1 = 0, x2 = 0, x3 = 1, for which the formula φ is satisfiable.
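The reduction can be checked by brute force on the example formula; the following sketch uses our own encoding of clauses as signed variable indices (+i for x_i, −i for ¬x_i):

```python
from itertools import product

# phi = c1 & c2 & c3 & c4 from the example above.
clauses = [(1, -2, 3), (-1, -2, -3), (1, -2, -3), (-1, 2, 3)]
r = 3

def dpr_holds(y, z):
    # Check the DPR instance: y_i + z_i <= 1, total sum >= r, and one
    # inequality <= 2 per clause, where a positive literal x_i
    # contributes z_i and a negative literal contributes y_i.
    if any(y[i] + z[i] > 1 for i in range(r)):
        return False
    if sum(y) + sum(z) < r:
        return False
    for cl in clauses:
        total = sum(y[-l - 1] if l < 0 else z[l - 1] for l in cl)
        if total > 2:
            return False
    return True

sols = [(y, z) for y in product((0, 1), repeat=r)
               for z in product((0, 1), repeat=r) if dpr_holds(y, z)]
assignments = {y for y, _ in sols}        # y encodes the assignment of phi
print((0, 0, 1) in assignments)           # -> True: x1=0, x2=0, x3=1 works
```

Every (y, z) accepted by `dpr_holds` has y_i + z_i = 1, so y directly encodes a satisfying assignment, exactly as argued in the correctness proof.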
3. EVOLVING BY MAXIMIZING THE NUMBER OF RULES
105
We now prove that a formula φ is satisfiable if and only if there is a vector (y_1, ..., y_r, z_1, ..., z_r) of positive integers which is a solution for the above instance of DPR.

First, suppose there is a satisfying assignment for φ. If x_i = 1 we set y_i = 1, z_i = 0, and if x_i = 0 we set y_i = 0, z_i = 1. Thus we have y_i + z_i ≤ 1 for all i = 1,r, and also Σ_{i=1,r} (y_i + z_i) ≥ r. Consider now one of the inequalities
Σ_{i=1,r} q_{i,a}·y_i + Σ_{i=1,r} l_{i,a}·z_i ≤ 2.
We notice that its left hand side contains exactly three variables with coefficient 1, one for each literal appearing in c_a. If the literal with value 1 in c_a is x_j, then its corresponding variable is z_j, which is 0. If the literal with value 1 in c_a is ¬x_j, then its corresponding variable is y_j, which is 0. Thus at most two terms are equal to 1, meaning that the inequality is satisfied.

Now suppose there is a solution (y_1, ..., y_r, z_1, ..., z_r) for the DPR instance. Since y_i + z_i ≤ 1 for all i = 1,r and Σ_{i=1,r} (y_i + z_i) ≥ r, it follows that y_i + z_i = 1 for all i. We consider the assignment x_i = 1 if y_i = 1, z_i = 0, and x_i = 0 if y_i = 0, z_i = 1. As previously noted, the inequality corresponding to a clause c_a has exactly three variables, each with coefficient 1, in its left hand side. Thus at least one of them must be equal to 0. If that variable is z_j, it means that the literal x_j, with assignment x_j = 1, appears in c_a. If that variable is y_j, it means that the literal ¬x_j, with assignment x_j = 0, appears in c_a. Thus φ is satisfied.

We can also consider the problem 1DPR obtained from DPR by restricting the possible values of the variables to 0 or 1. This corresponds to requiring that in a membrane a rule can be applied at most once. Exactly the same reduction can then be made from 3CNFSAT to 1DPR, thus placing 1DPR in the category of NP-complete problems.

3.2. Certain Subproblems. We denote by DPRk the problem obtained from DPR by considering m = k fixed. A similar notation is used for DPOk. We start by looking at the case of a P system which has only simple rules, i.e. rules which have only one type of object in their left hand side. Then DPR1 describes the decision version of the problem of finding a multiset of simple rules which is maxR-applicable: given d, t, k_{i,1} and C1, find x_i such that Σ_{i=1,d} x_i ≥ t and Σ_{i=1,d} x_i·k_{i,1} ≤ C1.

Proposition 2.13. DPR1 is in P.
Proof. Note that all k_{i,1} ≠ 0 by definition, because rules always have a non-empty left hand side. Let j be chosen such that k_{j,1} = min_{i=1,d} {k_{i,1}}. A solution is given by setting x_j = ⌊C1/k_{j,1}⌋ (the integer part of C1/k_{j,1}), and x_i = 0 for i ≠ j.
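This decision procedure is a one-line computation; a minimal sketch (the function name is ours):

```python
def dpr1(d, t, k, C1):
    """Decide DPR1: are there nonnegative integers x_1..x_d with
    sum(x_i) >= t and sum(x_i * k[i]) <= C1?

    Since every k[i] >= 1, the largest achievable sum(x_i) is obtained by
    spending all of C1 on the cheapest rule: floor(C1 / min(k)).
    """
    assert d == len(k) and all(ki >= 1 for ki in k)
    return C1 // min(k) >= t

# Example: three rule types consuming 3, 2 and 5 copies of the single object.
print(dpr1(3, 4, [3, 2, 5], 9))   # 9 // 2 = 4 >= 4 -> True
print(dpr1(3, 5, [3, 2, 5], 9))   # 4 < 5          -> False
```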
106
2. COMPLEXITY OF MEMBRANE SYSTEMS
We can also consider the problem 1DPR1, obtained by restricting the possible values of x_i to 0 or 1. This problem is in P, which can be seen from the following algorithm. First we renumber the coefficients k_{i,1} (together with the variables x_i) such that k_{1,1} ≤ k_{2,1} ≤ ... ≤ k_{d,1}; then we set s_1 = k_{1,1} and s_{i+1} = s_i + k_{i+1,1}. If s_d ≤ C1, then the maximum value for Σ_i x_i is d. Otherwise, there exists a unique j such that s_j ≤ C1 < s_{j+1}. Therefore the maximum value for Σ_i x_i is j, because whenever we choose j + 1 different coefficients k_{r_1,1}, k_{r_2,1}, ..., k_{r_{j+1},1}, their sum is at least s_{j+1}, hence greater than C1.

We now consider a membrane whose maxR evolution has only two types of objects, i.e. #O = 2. The corresponding decision problem is DPR2.

Proposition 2.14. DPR2 is NP-complete.

To prove this result we consider the following auxiliary problem AP: given positive integers s, r, k and k_1, ..., k_s, are there positive integers x_1, ..., x_s such that
Σ_{i=1,s} x_i = r,   Σ_{i=1,s} k_i·x_i = k?
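Returning to the prefix-sum algorithm for 1DPR1 above, a minimal sketch (function names are ours):

```python
def max_rules_1dpr1(k, C1):
    """Maximum number of 0/1 variables settable to 1 subject to the sum of
    the chosen coefficients being <= C1: sort ascending and take the
    longest prefix whose running sum stays within C1."""
    total, count = 0, 0
    for ki in sorted(k):
        if total + ki > C1:
            break
        total += ki
        count += 1
    return count

def one_dpr1(d, t, k, C1):
    return max_rules_1dpr1(k, C1) >= t

print(max_rules_1dpr1([3, 2, 5, 4], 9))  # prefix 2 + 3 + 4 = 9 -> 3
print(one_dpr1(4, 3, [3, 2, 5, 4], 9))   # True
```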
Note that if we restrict this problem by imposing the condition that all x_i ∈ {0, 1}, then we obtain a subproblem of the subset sum problem, namely: given a set S = {k_i | i = 1,s} of positive integers, is there a subset of S with r elements such that the sum of its elements equals k? This provides a strong hint that AP is NP-complete. The proof of Proposition 2.14 is based on constructing a polynomial-time reduction from X3C to AP, and another one from AP to DPR2.
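The 0/1 restriction of AP is exactly this cardinality-constrained subset sum; a brute-force sketch of the restricted problem (illustrative only, exponential in general):

```python
from itertools import combinations

def ap_01(ks, r, k):
    """Brute-force the 0/1 restriction of AP: is there a subset of the
    coefficients ks with exactly r elements summing to k?"""
    return any(sum(c) == k for c in combinations(ks, r))

print(ap_01([2, 3, 5, 7], 2, 10))  # 3 + 7 = 10 -> True
print(ap_01([2, 3, 5, 7], 2, 11))  # no 2-subset sums to 11 -> False
```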
Proof. First, let us note that both DPR2 and AP are in NP. This can be easily proved by constructing a Turing machine similar to the one used in the proof of Proposition 2.12.

Secondly, we provide a polynomial-time reduction from X3C to AP. The exact cover by 3-sets (X3C) problem asks whether, given a set X with 3q elements and a collection C of 3-element subsets of X, there is a subcollection C′ of C which is an exact cover for X, i.e. any element of X belongs to exactly one element of C′ [44].

To reduce X3C to AP we do the following. Let l be the number of elements of C and consider an indexing c_1, ..., c_l of the elements of C. For each c_i we consider a variable x_i in the AP problem, thus setting s = l. To construct the coefficients k_i, we employ the notations e_{ij} = #(c_i ∩ c_j) (i, j = 1,l) and M = 3q + 1. We set r = q, k_i = Σ_{j=1,l} e_{ij}·M^{l−j} and k = Σ_{j=1,l} 3·M^{l−j}. For a solution C′ of X3C we set x_i = 1 whenever c_i ∈ C′, and x_i = 0 otherwise. We prove that this yields a solution of the constructed instance of AP; moreover, any solution of the instance has x_i ∈ {0, 1} and produces a solution of X3C.

Example. Consider the problem X3C for X = {1, ..., 9} and c1 = {1, 2, 3}, c2 = {1, 3, 4}, c3 = {4, 5, 6}, c4 = {1, 6, 8}, c5 = {4, 7, 9} and c6 = {7, 8, 9}. Then M = 10, and the coefficients k_i are written in base 10 such that they have a digit for each variable x_j:
       x1   x2   x3   x4   x5   x6
k1      3    2    0    1    0    0
k2      2    3    1    1    1    0
k3      0    1    3    1    1    0
k4      1    1    1    3    0    1
k5      0    1    1    0    3    2
k6      0    0    0    1    2    3
k       3    3    3    3    3    3
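A sketch of this coefficient construction (the function name is ours), reproducing the base-10 digits in the table above:

```python
def x3c_to_ap(q, C):
    """Reduce X3C (|X| = 3q, C a list of 3-element frozensets) to AP:
    returns (s, r, ks, k) with k_i = sum_j e_ij * M^(l-j) and M = 3q+1."""
    l = len(C)
    M = 3 * q + 1
    ks = []
    for ci in C:
        e = [len(ci & cj) for cj in C]          # e_ij = #(c_i ∩ c_j)
        ks.append(sum(e[j] * M ** (l - 1 - j) for j in range(l)))
    k = sum(3 * M ** (l - 1 - j) for j in range(l))
    return l, q, ks, k

C = [frozenset(s) for s in
     [{1, 2, 3}, {1, 3, 4}, {4, 5, 6}, {1, 6, 8}, {4, 7, 9}, {7, 8, 9}]]
s, r, ks, k = x3c_to_ap(3, C)
print(ks)  # [320100, 231110, 13110, 111301, 11032, 123]
print(k)   # 333333
# The exact cover {c1, c3, c6} indeed sums to k:
print(ks[0] + ks[2] + ks[5] == k)  # True
```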
We can see that an exact cover of X is given by c1, c3, c6. Looking at this example, we see why any solution to AP has all x_i ∈ {0, 1}: all coefficients have at least one digit equal to 3, and the base M is chosen such that when adding coefficients no carries can occur from lower digits to higher digits.

We first prove that a solution C′ for X3C gives a solution for AP. Let I = {i | c_i ∈ C′}. Since C′ is an exact cover for X, it follows that I has q elements, and that e_{ij} = 0 for i, j ∈ I, i ≠ j. Moreover, if j ∉ I then we have c_j = c_j ∩ (∪_{i∈I} c_i) = ∪_{i∈I} (c_j ∩ c_i), and so Σ_{i∈I} e_{ij} = 3. Since x_i = 1 for i ∈ I and x_i = 0 for i ∉ I, it follows that indeed Σ_{i=1,l} x_i = q. We also have

Σ_{i=1,l} k_i·x_i = Σ_{i∈I} (Σ_{j=1,l} e_{ij}·M^{l−j}) = Σ_{i∈I} (e_{ii}·M^{l−i} + Σ_{j∉I} e_{ij}·M^{l−j}) = Σ_{i∈I} 3·M^{l−i} + Σ_{j∉I} (Σ_{i∈I} e_{ij})·M^{l−j}.
Using the previous remarks, we obtain that the term of the second sum is 3·M^{l−j}, and so Σ_{i=1,l} k_i·x_i = k.

Secondly, let us consider a solution (x_i)_{i=1,s} for the instance of AP with s, r, k_i, k as above. Let I = {i | x_i = 1}. We prove that x_j = 0 for j ∉ I, and that e_{ij} = 0 for i, j ∈ I, i ≠ j. This is sufficient to prove that C′ = {c_i | i ∈ I} is an exact cover: it follows that C′ has exactly q elements and that c ∩ c′ = ∅ for all c, c′ ∈ C′, c ≠ c′, so the q pairwise disjoint 3-sets of C′ cover all 3q elements of X. We have

(10)   Σ_{j=1,l} 3·M^{l−j} = k = Σ_{i=1,l} k_i·x_i = Σ_{j=1,l} (Σ_{i=1,l} e_{ij}·x_i)·M^{l−j}.

Since Σ_{i=1,l} e_{ij}·x_i ≤ Σ_{i=1,l} 3·x_i = 3q < M, the two sides of equation (10) represent two decompositions in base M of the same number k. Therefore we have Σ_{i=1,l} e_{ij}·x_i = 3 for any j = 1,l. For i = j we get e_{jj}·x_j = 3·x_j ≤ 3, i.e. all x_j ∈ {0, 1}. Thus 3 = Σ_{i∈I} e_{ij}. Considering j ∈ I we obtain that 3 = 3 + Σ_{i∈I, i≠j} e_{ij}, namely that e_{ij} = 0 for i, j ∈ I, i ≠ j, which concludes the second part of the reduction.

It remains to show that AP reduces to DPR2. We recall the data of DPR2: given d, t, C1, C2, k_{i,1}, k_{i,2} (i = 1,d), are there positive integers
x_1, ..., x_d such that

(11)   Σ_{i=1,d} x_i ≥ t,   Σ_{i=1,d} k_{i,1}·x_i ≤ C1,   Σ_{i=1,d} k_{i,2}·x_i ≤ C2 ?
The reduction is as follows: let K = max_{i=1,s} k_i and set d = s, t = r, k_{i,1} = k_i, k_{i,2} = K − k_i, C1 = k, C2 = K·r − k. If x_1, ..., x_s is a solution for the instance of AP, it is also a solution for this instance of DPR2. Conversely, if x_1, ..., x_s is a solution for this instance of DPR2, we sum the last two inequalities of (11), obtaining Σ_{i=1,s} K·x_i ≤ K·r. Since Σ_{i=1,s} x_i ≥ t = r, we obtain that Σ_{i=1,s} x_i = r; then the third inequality of (11) yields Σ_{i=1,s} k_i·x_i ≥ k, which together with the second one gives Σ_{i=1,s} k_i·x_i = k.

We contrast these results with those for DPO and its analogous subproblems. Both DPO and DPR are NP-complete. However, we obtain significant differences when restricting to the case of P systems with simple rules: while DPO1 is NP-complete, DPR1 is in P. When we employ cooperative rules with a fixed maximum number k > 1 of objects in the left hand side, the decision problems thus obtained, DPOk and DPRk, are all NP-complete. Together with the results presented in the previous sections, these results open the possibility of studying complexity and computability for new classes of P systems. They also facilitate a complexity comparison between various classes of P systems.

4. Strategies of Using the Rules of a P System in a Maximal Way

We examine the complexity of P systems which use two strategies of applying the rules: one strategy is to maximize the number of objects used by the rules, and the other is to maximize the number of rules applied in each membrane. For P systems with cooperative multiset rewriting rules, P systems with active membranes, and P systems with symport/antiport rules, we prove computational universality for both these types of parallelism. The computational complexity of the maximum consuming systems is studied for systems with cooperative rules of two types, by using two known NP-complete combinatorial problems, namely the knapsack problem and integer linear programming.
We consider the maximum consuming strategies, defining the transitions in a P system in such a way as to maximize the number of objects evolved in each membrane, as well as the related case of applying a multiset of rules of maximal cardinality. First we investigate the computing power of three basic classes of P systems under these strategies. Specifically, we consider P systems with multiset rewriting rules, with symport/antiport rules, and with active membranes. In all cases (with cooperative rules in the first one) we prove the universality of the considered systems. (The proofs are rather standard in this area: simulations of register machines.) All
these results need improvements. For instance, we know that for the usual maximal parallelism (in the multiset inclusion sense), P systems with catalytic rules are also universal, and a similar result is expected for the two types of maximal parallelism considered here. Then, for symport/antiport P systems, our proof uses antiport rules of weight 2 (two objects together are moved inside or outside a membrane), but, for the "standard maximal parallelism", systems with symport/antiport rules of weight 1 or with symport rules of weight 2 are also universal [4]; similar results remain to be proved for the new forms of maximality. In turn, in the case of P systems with active membranes, we use here (two) polarizations, but it is known that universality results can be obtained also for non-polarized membranes in the case of usual maximal parallelism; this topic remains to be considered for the new maximality modes.

4.1. Types of Maximal Parallelism. In the standard maximally parallel way of using the rules (we denote this case by maxP), the objects of w_i are non-deterministically assigned to rules in R_i until no further object can be assigned to a rule, and the assigned objects evolve (clearly, the objects evolve through the respective rules to which they were assigned). Otherwise stated, we choose a multiset of rules in R_i (we choose a subset R_i′ ⊆ R_i and we associate multiplicities to the rules in R_i′); this multiset (hence, having the support R_i′) is applied only if it has the following two properties: (1) its rules can be applied to the objects in w_i, and (2) no rule can be added to R_i′ and no multiplicity of rules in R_i′ can be increased such that the obtained multiset is still applicable to w_i. A simple example will be discussed immediately. Thus, the applied multiset of rules should be maximal, in the sense of inclusion, among the multisets of rules applicable to the objects in w_i.
This maximality does not ensure that the number of evolved objects or the number of used rules is maximal. Indeed, let us assume that w_i = abcde and that R_i consists of the following rules (the multisets u1–u4 in the right hand sides of the rules are not relevant):

r1 : ab → u1,   r2 : c → u2,   r3 : bc → u3,   r4 : acde → u4.

In the table below we indicate all multisets (in this case, the multiplicity of used rules is always 1) of applicable rules, together with the number of objects evolved and the number of rules used:

Case   Rules used   No. obj.   No. rules   Types of maximality
1      r1           2          1           none
2      r2           1          1           none
3      r3           2          1           maxP
4      r4           4          1           maxO, maxP
5      r1, r2       3          2           maxR, maxP

No rule can be added in cases 3, 4, 5, hence for them we have found a multiset of rules which can be applied in the maxP mode.
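The table can be recomputed by brute force. The sketch below is illustrative only (multisets as Counters, names are ours): it enumerates the applicable multisets of rules, keeps the inclusion-maximal (maxP) ones, and extracts the maxO and maxR values.

```python
from collections import Counter
from itertools import combinations_with_replacement

RULES = {"r1": Counter("ab"), "r2": Counter("c"),
         "r3": Counter("bc"), "r4": Counter("acde")}
w = Counter("abcde")

def applicable(rules, w):
    """Can the multiset of rules (Counter of rule names) fire on w?"""
    need = Counter()
    for name, mult in rules.items():
        for obj, n in RULES[name].items():
            need[obj] += n * mult
    return all(w[o] >= n for o, n in need.items())

# Enumerate all applicable multisets of rules (multiplicities bounded by 5).
candidates = set()
for size in range(1, 6):
    for combo in combinations_with_replacement(RULES, size):
        ms = Counter(combo)
        if applicable(ms, w):
            candidates.add(frozenset(ms.items()))

def maximal(ms):  # maxP: no further rule can be added
    m = Counter(dict(ms))
    return not any(applicable(m + Counter([r]), w) for r in RULES)

maxp = [dict(ms) for ms in candidates if maximal(ms)]
objs = lambda ms: sum(sum(RULES[r].values()) * n for r, n in ms.items())
print(sorted(sorted(ms) for ms in maxp))     # the three maxP multisets
print(max(objs(ms) for ms in maxp))          # maxO value: 4 (via r4)
print(max(sum(ms.values()) for ms in maxp))  # maxR value: 2 (via r1, r2)
```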
Note that the number of objects is bigger in case 4 than in all others, and that the number of rules is the biggest in case 5. This suggests defining two new strategies of using the rules: maximizing the number of evolving objects (denoted by maxO) and applying a maximal number of rules (denoted by maxR). Of course, if several multisets of rules lead to the same maximal number of objects or of rules, then one multiset is non-deterministically chosen, as usual also for maxP. The previous table contains such cases – and it is important to note that they do not coincide; clearly, a transition which is of type maxO or maxR is also of type maxP (otherwise, adding a rule to the multiset would increase the number of evolved objects and, trivially, of applied rules). For non-cooperative rules, the three types of maximal parallelism coincide. This is not the case for catalytic rules: for wi = cab (c is the catalyst) and

Ri = {r1 : ca → u1, r2 : cb → u2, r3 : a → u3},
both r1 and r2 r3 are maxP, but only the second multiset is also maxO and maxR. The previous example can be transferred to a symport/antiport system: assume that c is outside membrane i, ab are inside, and the rules r1 : (c, in; a, out), r2 : (c, in; b, out), r3 : (a, out) are associated with membrane i. As above, both r1 and r2 r3 are maxP, but only r2 r3 is also maxO and maxR. Using symport or antiport rules with more objects in their multisets (of a higher weight) will lead to still more interesting cases, like in the first example considered above. A similar situation appears for rules with active membranes: consider a membrane i with objects ab inside and subject to the following rules r1 : [ a ] i → [ ] i a, r2 : [ b ] i → [ ] i b, r3 : [ a → u] i .
Like in the symport/antiport case, both r1 and r2 r3 are maxP, but only r2 r3 is also maxO and maxR. The fact that the three modes of maximal parallelism are different for the three types of P systems mentioned above makes it interesting to study the computing power of these systems for the new types of maximality, maxO and maxR; this is done in the next section.

4.2. Computational Power. For the sake of readability, we start by recalling the formal notation for (cell-like) P systems with multiset rewriting rules. Such a system is a construct Π = (O, µ, w1, ..., wm, R1, ..., Rm, io), where O is the alphabet of objects, µ is the membrane structure (with m membranes), w1, ..., wm are (strings over O representing) multisets of objects present in the m regions of µ at the beginning of a computation, R1, ..., Rm are finite sets of evolution rules associated with the regions of µ, and io is the label of a membrane, used as the output membrane. The general form of a rule is u → v, where u ∈ O+
and v ∈ (O ∪ (O × {out, in}))∗; when applying such a rule, the objects of u are "consumed" and those of v are "produced" and immediately transported to the region indicated by the targets out, in, i.e., outside the membrane where the rule was used in the case of out, or to any of its directly inner membranes, non-deterministically chosen, in the case of in.

We denote by N_maxX(Π) the set of numbers computed by Π in the maxX mode, with X ∈ {P, O, R}, and by NOP_m^maxX(type-of-rules) the family of sets N_maxX(Π) computed by systems Π with at most m membranes and rules as indicated by type-of-rules; as types of rules we consider here only coo, indicating cooperative rules. It is known that NOP_1^maxP(coo) = N RE, where N RE is the family of Turing computable sets of numbers, and a similar result is true for catalytic rules. Such a result is also true for maxO and maxR. All proofs are based on simulating register machines; as it is known (see, e.g., [77]), register machines (even with a small number of registers, three) compute all sets of numbers which are Turing computable, hence they characterize N RE. A register machine M computes a number n in the following way: we start with all registers empty (i.e., storing the number zero), we apply the instruction with label l0 and we proceed to apply instructions as indicated by the labels (and made possible by the contents of the registers); if we reach the halt instruction, then the number n stored at that time in the first register is said to be computed by M. The set of all numbers computed by M is denoted by N(M).

Theorem 2.15. NOP_1^maxX(coo) = N RE, for all X ∈ {P, O, R}.

Proof. The case X = P is known; we have mentioned it only for the sake of uniformity.
Consider a register machine M = (m, H, l0, lh, I) as above (the number of registers is not relevant), and construct the P system Π = (O, [ ]1, l0, R1, 1), with the alphabet

O = {ar | 1 ≤ r ≤ m} ∪ {l, l′, l′′, l′′′, l^iv | l ∈ H} ∪ {#}

and the set of rules obtained as follows (the value of register r is represented in Π by the number of copies of object ar present in the unique membrane of Π):
(1) For each ADD instruction li : (ADD(r), lj, lk) of M we introduce in R1 the rules li → lj ar, li → lk ar. This is an obvious simulation of the ADD instruction.
(2) For each SUB instruction li : (SUB(r), lj, lk) of M we introduce in R1 the rules
li → li′ li′′,
li′ ar → li′′′,
li′′ → li^iv,
li^iv li′′′ → lj,
li^iv li′ → lk.
If any copy of ar is present, the "checker" li^iv finds in the membrane the object li′′′ and introduces the label lj; otherwise the label lk is introduced, which means that the SUB instruction is correctly simulated. Note that in both cases the three maximality modes coincide.
(3) In order to correctly simulate a computation of M, we have to remove all objects different from a1. In the case of maxR, this can be done by means of the following rules (also working for the maxP case):
lh ar → lh, r ∈ {2, 3, ..., m},
lh → lh′ lh′′,
lh′ ar → #, r ∈ {2, 3, ..., m},
# → #,
lh′′ → lh′′′,
lh′′′ lh′ → λ.
After removing the objects ar, r ≠ 1 (one rule is used in each step), one passes to checking whether any object ar, r ≠ 1, is still present, and only in the negative case one also removes the primed versions of lh. In the case of maxO, only the first rule above, together with lh → λ, are enough: this last rule cannot be used as long as any rule lh ar → lh, r ≠ 1, can be used. Of course, this last group of rules can be avoided if we start from a register machine which ends its computations with all registers empty except the first one (then the rule lh → λ is sufficient in all cases), but we wanted to make explicit use of the restriction imposed by the maxO mode in ensuring the correct simulation of the register machine.
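The three-step SUB gadget above can be traced directly on a multiset. A minimal sketch, with hypothetical names l1–l4 standing for the primed versions li′, li′′, li′′′, li^iv:

```python
from collections import Counter

# SUB gadget rules from the proof, applied step by step:
#   l -> l1 l2 ;  l1 a -> l3 ;  l2 -> l4 ;  l4 l3 -> lj ;  l4 l1 -> lk
# The object "a" plays the role of a_r (one copy per unit in register r).

def simulate_sub(n, lj="lj", lk="lk"):
    """Simulate the SUB gadget on a membrane containing l and n copies of a."""
    w = Counter({"l": 1, "a": n})
    w["l"] -= 1; w["l1"] += 1; w["l2"] += 1       # step 1: l -> l1 l2
    if w["a"] > 0:                                # step 2: l1 a -> l3 (if any a)
        w["a"] -= 1; w["l1"] -= 1; w["l3"] += 1
    w["l2"] -= 1; w["l4"] += 1                    # step 2: l2 -> l4
    if w["l3"] > 0:                               # step 3: checker l4 decides
        w["l3"] -= 1; w["l4"] -= 1; w[lj] += 1    #   l4 l3 -> lj
    else:
        w["l1"] -= 1; w["l4"] -= 1; w[lk] += 1    #   l4 l1 -> lk
    return {k: v for k, v in w.items() if v > 0}

print(simulate_sub(2))  # {'a': 1, 'lj': 1}  register decremented, jump to lj
print(simulate_sub(0))  # {'lk': 1}          zero test succeeds, jump to lk
```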
R1, ..., Rm are finite sets of symport/antiport rules associated with the membranes of µ. The rules are of the forms (u, in) or (u, out) (symport rules) and (u, out; v, in) (antiport rules), where u, v ∈ O+. Using such a rule from Ri means moving across membrane i the objects specified by u and v, in the indicated directions. The maximal length of u and v over all rules of the system defines the weight of the system. We denote by N_maxX(Π) the set of numbers computed by Π in the maxX mode, with X ∈ {P, O, R}, and by NOP_m^maxX(sym_p, anti_q) the family of sets N_maxX(Π) computed by systems Π with at most m membranes, symport rules of weight at most p, and antiport rules of weight at most q. It is known that NOP_3^maxP(sym1, anti1) = NOP_3^maxP(sym2, anti0) = N RE. A similar result is also true for maxO and maxR.

Theorem 2.16. NOP_1^maxX(sym1, anti2) = N RE, for all X ∈ {P, O, R}.
Proof. Consider a register machine M = (m, H, l0, lh, I) as above, this time making the assumption (which does not restrict the generality) that at the end of the computations only the first register may be non-empty. We construct the P system Π = (O, [ ]1, l0, O, R1, 1), with the alphabet

O = {ar | 1 ≤ r ≤ m} ∪ {l, l′, l′′, l′′′, l^iv | l ∈ H}
and the set of rules obtained as follows:
(1) For each ADD instruction li : (ADD(r), lj, lk) of M we introduce in R1 the rules (li, out; lj ar, in), (li, out; lk ar, in).
(2) For each SUB instruction li : (SUB(r), lj, lk) of M we introduce in R1 the rules
(li, out; li′ li′′, in),
(li′ ar, out; li′′′, in),
(li′′, out; li^iv, in),
(li^iv li′′′, out; lj, in),
(li^iv li′, out; lk, in).
Like in the previous proof, the maximal parallelism of any type ensures the correct simulation of the instructions of M.
(3) In order to finish the simulation we just add the rule (lh, out). The three maximality modes maxX, X ∈ {P, O, R}, coincide, and this concludes the proof.
4.4. Universality for Rules with Active Membranes. The definition of a P system with active membranes is the same as for systems with multiset rewriting rules, but the membranes are explicit parts of the rules. In what follows we only use three types of rules (evolution, send-in, and send-out), but we use two polarizations of membranes. Specifically, we use rules of the following forms:
(a) [ a → u ]^e_i, with a ∈ O, u ∈ O∗, and e ∈ {0, +},
(b) [ a ]^e_i → [ ]^f_i b, with a, b ∈ O and e, f ∈ {0, +},
(c) a [ ]^e_i → [ b ]^f_i, with a, b ∈ O and e, f ∈ {0, +}.
Note that rules of types (b) and (c) are symbol-to-symbol and that they can change the polarization of the membrane. When specifying a P system with active membranes, we have to define all components as in a system with multiset rewriting, but with only one set of rules: the rules precisely identify the membranes where they are applied. We denote by NOP_m^maxX((a), (b), (c)) the family of sets N_maxX(Π) computed by systems Π with active membranes, using at most m membranes and rules of the three types defined above. The counterpart of the previous results for the case of rules with active membranes leads to the next result, which is of some interest also for the maxP case, because the proof is done here starting from register machines, not from matrix grammars with appearance checking, as usual in the literature for this case.

Theorem 2.17. NOP_m^maxX((a), (b), (c)) = N RE, for all m ≥ 3 and X ∈ {P, O, R}.
Proof. Consider a register machine M = (m, H, l0, lh, I) as above, again with the assumption that at the end of the computations only the first register may be non-empty. Because three registers ensure universality, we assume that m = 3. We construct the P system Π = (O, B, [ [ ]^0_1 [ ]^0_2 ]^0_3, λ, λ, l0, R, 1), with

O = {ar | r = 1, 2, 3} ∪ {l, l′, l′′, l′′′, l^iv, l^v, l^vi | l ∈ H},
B = {1, 2, 3}, with 3 being the skin membrane,
and the set of rules obtained as follows; with each register r = 1, 2, 3 of M we associate a membrane with label r and also an object ar:
(1) For each ADD instruction li : (ADD(r), lj, lk), r = 1, 2, of M we introduce in R the rules
li [ ]^0_r → [ li′ ]^0_r,
[ li′ → lj ar ]^0_r,
[ li′ → lk ar ]^0_r,
[ lg ]^0_r → [ ]^0_r lg, g ∈ {j, k},
and for r = 3 we consider the rules [ li → lg a3 ]^0_3, g ∈ {j, k}. The simulation of the ADD instruction is obvious.
(2) For each SUB instruction li : (SUB(r), lj, lk), r = 1, 2, of M we introduce in R the rules
li [ ]^0_r → [ li′ ]^+_r,
[ ar ]^+_r → [ ]^0_r ar,
[ li′ → li′′ ]^+_r,
[ li′′ ]^0_r → [ ]^0_r lj,
[ li′′ ]^+_r → [ ]^0_r lk,
while for r = 3 we consider the following rules:
[ li → li′ li′′ ]^0_3,
[ li′ → li′′′ ]^0_3,
[ li′′ ]^0_3 → [ ]^+_3 li^iv,
[ a3 ]^+_3 → [ ]^0_3 a3,
[ li′′′ → li^v ]^+_3,
[ li^v → lj ]^0_3,
[ li^v ]^+_3 → [ ]^+_3 li^vi,
li^vi [ ]^+_3 → [ lk ]^0_3.
The interplay of the primed versions of li and the polarizations now ensures the correct simulation of the SUB instruction in all modes maxX, X ∈ {P, O, R}. The simulation of the SUB instructions for register 3 is slightly more difficult than for the other registers. In the first step we introduce two objects: one for producing the "checker" li^v, and one for changing the polarization of the skin membrane. With the polarization changed to positive, an object a3 can be removed (due to the maximal parallelism, it must be removed if it exists), and this is recorded by changing the polarization back to neutral. Now, the "checker" li^v introduces the correct object lj or lk, depending on the membrane polarization (returning this polarization to neutral in the case the register was empty – this requires two steps, for sending out and bringing back a primed version of li). The three maximality modes maxX, X ∈ {P, O, R}, coincide; the end of a computation in M coincides with the halting of a computation in Π, and N(M) = N(Π), which concludes the proof. The optimality of this result in what concerns the number of membranes and the use of polarizations remains to be investigated.
4.5. Open Problems. All these results need further investigation, in order to improve them to the complexity level of the systems used in universality proofs for the maxP case. Several directions of research could be settled in a similar way as in the case of maxP. The parallelism (together with the possibility of creating an exponential workspace in linear time) is the basic ingredient for devising efficient solutions to computationally hard problems, mainly in terms of P systems with active membranes. The parallelism is traditionally maximal in the sense of maxP; however, the other types of maximality, namely maxO and maxR, could also be used. Maximizing the number of objects or of rules can be related to the idea of using energy for controlling the computations [93], where the rules to be used can be chosen in such a way as to maximize or minimize the consumed energy. Is it possible to relate these two ideas?
Bibliography

[ACR] O. Agrigoroaiei, G. Ciobanu, A. Resios. Evolving by Maximizing the Number of Rules: Complexity Study. Lecture Notes in Computer Science vol. 5957, 149–157, 2009.
[CG] G. Ciobanu, M. Gontineac. Mealy Membrane Automata and P Systems Complexity. Cellular Computing: Complexity Aspects, Fenix Editora, 149–164, 2005.
[CR1] G. Ciobanu, A. Resios. Computational Complexity of Simple P Systems. Fundamenta Informaticae vol. 87, 49–59, 2008.
[CR2] G. Ciobanu, A. Resios. Complexity of Evolution in Maximum Cooperative P Systems. Natural Computing vol. 8, 807–816, 2009.
[CMP] G. Ciobanu, S. Marcus, Gh. Păun. New Strategies of Using the Rules of a P System in a Maximal Way. Romanian Journal of Information Science and Technology vol. 12, 157–173, 2009.
CHAPTER 3
Mobile Membranes and Links to Ambients and Brane Calculi

In this chapter we present the systems of simple, enhanced, and mutual mobile membranes, as well as the mutual membranes with objects on surface. These systems are introduced, and their modelling and computational power are studied. Certain relationships are established between some of these classes on one hand, and mobile ambients and brane calculi on the other hand.

The systems of simple mobile membranes are a variant of P systems with active membranes having none of the features like polarizations, label change, division of non-elementary membranes, priorities, or cooperative rules. The additional features considered instead are the operations of endocytosis and exocytosis: moving a membrane inside a neighbouring membrane, or outside the membrane where it is placed. However, these operations are slightly different in the papers introducing them: in [33] one object is specified in each membrane involved in the operation, while in [67] one object is mentioned only in the moving membrane. Another variant of P systems with mobile membranes is mobile P systems [95], having rules inspired from mobile ambients [18]. Turing completeness is obtained by using nine membranes together with the operations of endocytosis and exocytosis [67]. Using also some contextual evolution rules (together with endocytosis and exocytosis), it is proven in [64] that four mobile membranes are enough to get the power of a Turing machine, while in [AC11] we decrease the number of membranes to three. In order to simplify the presentation, we use systems of simple mobile membranes instead of P systems with mobile membranes.

The systems of enhanced mobile membranes are a variant of membrane systems which we proposed in [AC7] for describing some biological mechanisms of the immune system.
The operations governing the mobility of the systems of enhanced mobile membranes are endocytosis (endo), exocytosis (exo), forced endocytosis (fendo) and forced exocytosis (fexo). The computational power of the systems of enhanced mobile membranes using these four operations was studied in [CK1], where it is proved that twelve membranes provide computational universality, while in [AC11] we improved the result by reducing the number of membranes to nine. It is worth noting that, unlike the previous results, the rewriting of objects by means of context-free rules is not used in any of these proofs.
Following our approach from [AC10], we define systems of mutual mobile membranes, a variant of systems of simple mobile membranes in which endocytosis and exocytosis work only whenever the involved membranes "agree" on the movement; this agreement is described by using dual objects a and ā in the involved membranes. The operations governing the mobility of the systems of mutual mobile membranes are mutual endocytosis (mutual endo) and mutual exocytosis (mutual exo). It is enough to consider the biologically inspired operations of mutual endocytosis and mutual exocytosis and three membranes (compartments) to get the full computational power of a Turing machine [AC12]. Three represents the minimum number of membranes needed to discuss properly the movement provided by endocytosis and exocytosis: we work with two membranes inside a skin membrane.

Membrane systems [89, 90] and brane calculus [17] have been inspired by the structure and the functioning of the living cell. Although these models start from the same observation, they are built with different goals in mind: membrane systems investigate formally the computational nature and power of various features of membranes, while brane calculus is capable of giving a faithful and intuitive representation of the biological reality. In [19] the initiators of these two formalisms describe the goals they had in mind: "While membrane computing is a branch of natural computing which tries to abstract computing models, in the Turing sense, from the structure and the functioning of the cell, making use especially of automata, language, and complexity theoretic tools, brane calculi pay more attention to the fidelity to the biological reality, have as a primary target systems biology, and use especially the framework of process algebra."

In [AC8] we define a new class of systems of mobile membranes, namely the systems of mutual membranes with objects on surface.
The inspiration to place objects on membranes and to use the biologically inspired rules pino/exo/phago comes from [12, 19, 26, 65, 66]. The novelty is that we use objects and co-objects in the phago and exo rules in order to express the fact that both membranes involved agree on the movement. We investigate in [AC8] the computational power of systems of mutual membranes with objects on surface controlled by pairs of rules, pino/exo or phago/exo, proving that they are universal with a small number of membranes. Similar rules are used by another formalism, the brane calculus [17]. We compare in [AC8] the systems of mutual membranes with objects on surface with brane calculus, and encode a fragment of brane calculus into the newly defined class of systems of mobile membranes. Even though brane calculus has an interleaving semantics and membrane systems have a parallel one, this translation shows that the difference between the two models is not significant.

Membrane systems [89, 90] and mobile ambients [18] have similar structures and common concepts. Both have a hierarchical structure representing locations, and both are used to model various aspects of biological
systems. Mobile ambients are suitable for representing the movement of ambients through ambients and the communication which takes place inside the boundaries of ambients, while membrane systems are suitable for representing the movement of objects and membranes through membranes. We consider these new computing models in describing various biological phenomena, and encode mobile ambients into membrane systems [AC1]. We present such an encoding, and use it to describe the sodium-potassium exchange pump [AC6]. We provide an operational correspondence between the safe ambients and their encodings, as well as various related properties of the membrane systems [AC6]. In [AC4] we investigate the problem of reaching a configuration from another configuration in a special class of systems of mobile membranes. We prove that reachability can be decided by reducing it to the reachability problem of a version of pure and public ambient calculus without the capability open.

A feature of current membrane systems is that objects and membranes are persistent. However, this is not quite true in the real world: cells and intracellular proteins have a well-defined lifetime. Inspired by these biological facts, we define in [AC13] a model of systems of mobile membranes in which each membrane and each object has a timer attached representing its lifetime. Some results show that systems of mutual mobile membranes with and without timers have the same computational power. Since we have defined an extension with time for mobile ambients in [AC2, AC3, AC9], and one for systems of mobile membranes in [AC13], we study the relationship between these two extensions: timed safe mobile ambients are encoded into systems of mutual mobile membranes with timers.

0.6. Preliminaries. We refer to [36] and [99] for the elements of formal language theory we use here. For an alphabet V, we denote by V∗ the set of all strings over V; λ denotes the empty string.
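Since multisets are represented in these preliminaries by strings counted up to permutation, the counting view can be sketched in Python. This is an informal illustration; the ordered alphabet V and the helper name `parikh` are our own choices.

```python
from collections import Counter

V = ("a", "b", "c")   # an example ordered alphabet V = {a1, ..., an}

def parikh(x):
    """Parikh vector of string x over V: the tuple of occurrence
    counts (|x|_a1, ..., |x|_an)."""
    counts = Counter(x)           # missing symbols count as 0
    return tuple(counts[a] for a in V)

# two permutations of the same string denote the same multiset
assert parikh("abba") == parikh("baab") == (2, 2, 0)
assert parikh("") == (0, 0, 0)    # the empty string λ
```

The key fact used throughout is visible here: a string and all of its permutations share one Parikh vector, so the vector identifies the multiset.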
V∗ is a monoid with λ as its unit element. The length of a string x ∈ V∗ is denoted by |x|, and |x|a denotes the number of occurrences of symbol a in x. A multiset over an alphabet V is represented by a string over V (together with all its permutations), and each string precisely identifies a multiset; the Parikh vector associated with the string indicates the multiplicities of each element of V in the corresponding multiset. We now briefly recall the following notions:

(1) Parikh vector: For V = {a1, . . . , an}, the Parikh mapping associated with V is ψV : V∗ → Nn defined by ψV(x) = (|x|a1, . . . , |x|an). For a language L, its Parikh set ψV(L) = {ψV(x) | x ∈ L} is the set of the Parikh vectors of all words x ∈ L. For a family FL of languages, we denote by PsFL the family of Parikh sets associated with languages in FL.

(2) E0L systems: An E0L system is a context-free pure grammar with parallel derivations: G = (V, T, ω, R), where V is the alphabet, T ⊆ V is the terminal alphabet, ω ∈ V∗ is the axiom, and R is a finite set
of rules of the form a → v with a ∈ V and v ∈ V∗, such that for each a ∈ V there is at least one rule a → v in R. For w1, w2 ∈ V∗, we say that w1 ⇒ w2 if w1 = a1 . . . an and w2 = v1v2 . . . vn with ai → vi ∈ R, 1 ≤ i ≤ n. The generated language is L(G) = {x ∈ T∗ | ω ⇒∗ x}. The family of languages generated by E0L systems is denoted by E0L.

(3) ET0L systems: An ET0L system is a construct G = (V, T, ω, R1, . . . , Rn) such that each quadruple (V, T, ω, Ri) is an E0L system; each Ri is called a table, 1 ≤ i ≤ n. The generated language is defined as L(G) = {x ∈ T∗ | ω ⇒Rj1 w1 ⇒Rj2 · · · ⇒Rjm wm = x; m ≥ 0; 1 ≤ ji ≤ n; 1 ≤ i ≤ m}. The family of languages generated by ET0L systems is denoted by ET0L. In the sequel, we make use of the following normal form for ET0L systems: each language L ∈ ET0L can be generated by an ET0L system G = (V, T, ω, R1, R2) having only two tables. Moreover, from the proof of Theorem V.1.3 in [98], we see that any derivation with respect to G starts by several steps of R1, then R2 is used exactly once, and the process is iterated; the derivation ends by using R2.

(4) Matrix grammars: A context-free matrix grammar without appearance checking is a construct G = (N, T, S, M), where N, T are disjoint alphabets of non-terminals and terminals, S ∈ N is the axiom, and M is a finite set of matrices of the form m : (A1 → x1, A2 → x2, . . . , An → xn), n ≥ 1, with Ai → xi a rewriting rule, Ai ∈ N, xi ∈ (N ∪ T)∗, for all 1 ≤ i ≤ n. For a string w, a matrix m : (r1, r2, . . . , rn) is executed by applying the productions r1, r2, . . . , rn one after another, following the order in which they appear in the matrix. We write w ⇒m z if there is a matrix m : (A1 → x1, A2 → x2, . . . , An → xn) in M and strings w1, w2, . . . , wn+1 in (N ∪ T)∗ such that w = w1, wn+1 = z, and for each 1 ≤ i ≤ n we have wi = wi′Aiwi′′, wi+1 = wi′xiwi′′. The language generated by G is L(G) = {x ∈ T∗ | S ⇒mi1 x1 ⇒mi2 x2 ⇒mi3 . . .
⇒mik xk = x, k ≥ 1, mij ∈ M for 1 ≤ j ≤ k}. The family of languages generated by context-free matrix grammars is denoted by MAT. A matrix grammar with appearance checking has an additional component F, a set of occurrences of rules in M. For w, z ∈ (N ∪ T)∗, we write w ⇒m z if there is a matrix m : (A1 → x1, A2 → x2, . . . , An → xn) in M and strings w1, w2, . . . , wn+1 in (N ∪ T)∗ such that w = w1, wn+1 = z, and, for 1 ≤ i ≤ n, we have either wi = wi′Aiwi′′ and wi+1 = wi′xiwi′′, or Ai does not occur in wi, Ai → xi ∈ F, and wi+1 = wi. This means that either the rules can be applied in order to obtain z from w, or the i-th rule of M can be skipped and wi+1 = wi whenever the rule is not applicable (Ai does not occur in
wi). The family of languages generated by matrix grammars with appearance checking is denoted by MATac. It has been shown in [41] that each recursively enumerable language can be generated by a matrix grammar in the strong binary normal form. Such a grammar is a construct G = (N, T, S, M, F), where N = N1 ∪ N2 ∪ {S, #}, with these three sets mutually disjoint, two distinguished symbols B(1), B(2) ∈ N2, and the matrices in M of one of the following forms:
(1) (S → XA), with X ∈ N1, A ∈ N2,
(2) (X → Y, A → x), with X, Y ∈ N1, A ∈ N2, x ∈ (N2 ∪ T)∗,
(3) (X → Y, B(j) → #), with X, Y ∈ N1, j = 1, 2,
(4) (X → λ, A → x), with X ∈ N1, A ∈ N2, x ∈ T∗.
Moreover, there is only one matrix of type 1, and F consists of all the rules B(j) → #, j = 1, 2, appearing in matrices of type 3. The symbol # is a trap symbol: once introduced, it is never removed. Clearly, a matrix of type 4 is used only once, in the last step of a derivation. We denote by RE and CS the families of languages generated by arbitrary and context-sensitive grammars, respectively. It is known that PsE0L ⊆ PsET0L ⊂ PsCS ⊂ PsRE; PsMAT ⊂ PsCS ⊂ PsRE; PsMAT ⊂ PsMATac = PsRE.

1. Simple Mobile Membranes

Definition 3.1 ([67]). A system of n ≥ 1 simple mobile membranes is a construct Π = (V, H, µ, w1, . . . , wn, R), where:
(1) n ≥ 1 represents the number of membranes;
(2) V is an alphabet of symbols; its elements are called objects;
(3) H is a finite set of labels for membranes;
(4) µ ⊆ H × H describes the membrane structure, such that (i, j) ∈ µ denotes that the membrane labelled by j is contained in the membrane labelled by i; we distinguish the external membrane (usually called the "skin" membrane) and several internal membranes; a membrane without any other membrane inside it is said to be elementary;
(5) w1, . . . , wn are strings over V, describing the multisets of objects placed initially in the n membranes of µ;
(6) R is a finite set of developmental rules, of the following forms:
(a) [[a → v]m]k, for k, m ∈ H, a ∈ V, v ∈ V∗; local object evolution
These rules are called local because the evolution of an object a of membrane m is possible only when membrane m is inside membrane k. If the restriction of nested membranes is not imposed, that is, the evolution of the object a in membrane m is allowed irrespective of where membrane m is placed, then we
say that we have a global evolution rule, and write it simply as [a → v]m.
(b) [a]h [ ]m → [[b]h]m, for h, m ∈ H, a, b ∈ V; endocytosis
An elementary membrane labelled h (containing an object a) enters the adjacent membrane labelled m; the labels h and m remain unchanged during the process; however, the object a may be modified to object b during the operation; m is not necessarily an elementary membrane.
(c) [[a]h]m → [b]h [ ]m, for h, m ∈ H, a, b ∈ V; exocytosis
An elementary membrane labelled h (containing an object a) is sent out of a membrane labelled m; the labels of the two membranes remain unchanged, but the object a of membrane h may be modified to object b during this operation; membrane m is not necessarily elementary.
The rules are applied according to the following principles:
(1) All rules are applied in parallel, non-deterministically choosing the rules, the membranes, and the objects, but in such a way that the parallelism is maximal; this means that in each step we apply a set of rules such that no further rule can be added to the set.
(2) The membranes m and k from the rules of types (a)-(c) are said to be passive, while the membrane h is said to be active. In any step of a computation, any object and any active membrane can be involved in at most one rule, but the passive membranes are not considered involved in the use of the rules (hence they can be used by several rules at the same time as passive membranes); for instance, a rule [a → v]m of type (a) is considered to involve only the object a, not also the membrane m.
(3) The evolution of objects and membranes takes place in a bottom-up manner. After a set of rules has been chosen, the rules are applied starting from the innermost membranes, level by level, up to the skin membrane (all these sub-steps form a unique evolution step, called a transition step).
(4) When a membrane is moved across another membrane, by endocytosis or exocytosis, its whole contents (its objects) are moved; because of the bottom-up way of using rules, the inner objects first evolve (if rules are applicable to them), and then any membrane is moved with the contents obtained after this inner evolution.
(5) If a membrane exits the system (by exocytosis), then its evolution stops, even if there are rules of type (a) which would be applicable to it if the membrane were still in the system.
(6) All objects and membranes which do not evolve at a given step (for a given choice of rules which is maximal) are passed unchanged to the next configuration of the system.

1.1. Computational Power. The family of all sets Ps(Π) generated inside an output membrane by systems of simple mobile membranes using local evolution rules, endocytosis and exocytosis rules is denoted by PsMM(levol, endo, exo); when global evolution rules are used, levol is replaced by gevol. If a type of rules is not used, then its name is omitted from the list. The number of membranes does not increase during the computation, but it can decrease by sending membranes out of the system. PsMMn(endo, exo) denotes the family of sets of vectors of natural numbers computed by using at most n membranes; if the number of membranes is not bounded, we write PsMM∗(endo, exo). The following inclusions are a direct consequence of the previous definitions:

Lemma 3.2 ([67]). PsMMn(endo, exo) ⊆ PsMMn+1(endo, exo) ⊆ PsMM∗(endo, exo) ⊆ PsMM(gevol, endo, exo) ⊆ PsMM(levol, endo, exo) ⊆ PsRE, for all n ≥ 1.

For the proof of the following result, which establishes a universality result using nine membranes and the operations of endocytosis and exocytosis, we refer to [67].

Theorem 3.3 ([67]). PsMM9(endo, exo) = PsRE.

A strengthening of the universality result from the previous theorem is:

Corollary 3.4 ([67]). PsMM∗(endo, exo) = PsMMn(endo, exo) = PsMM(levol, endo, exo) = PsMM(gevol, endo, exo) = PsRE, for all n ≥ 9.

An improvement of the result presented in Theorem 3.3 is given in [64], along with the appropriate proof:

Theorem 3.5 ([67]). PsMM4(gevol, endo, exo) = PsRE.

Following the standard approach from membrane computing, namely identifying membrane systems with a minimal set of ingredients which are powerful enough to achieve the full power of Turing machines, we improved this result by decreasing the number of membranes to three.
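To make the endocytosis and exocytosis operations of Definition 3.1 concrete, here is a minimal Python sketch. The dict-based membrane representation and the function names are our own, not from [67]; maximal parallelism is not modelled, only a single application of each rule, with the object rewrite a → b performed during the move.

```python
# Membranes are dicts: a label, a list of objects, a list of child membranes.
def endo(parent, h, m, a, b):
    """[a]_h [ ]_m -> [[b]_h]_m : elementary membrane h, holding a,
    enters its sibling m; a is rewritten to b during the move."""
    if a in h["objects"] and h in parent["children"] and m in parent["children"]:
        h["objects"][h["objects"].index(a)] = b
        parent["children"].remove(h)
        m["children"].append(h)

def exo(parent, h, m, a, b):
    """[[a]_h]_m -> [b]_h [ ]_m : membrane h, holding a, exits m;
    a is rewritten to b during the move."""
    if a in h["objects"] and h in m["children"] and m in parent["children"]:
        h["objects"][h["objects"].index(a)] = b
        m["children"].remove(h)
        parent["children"].append(h)

h = {"label": "h", "objects": ["a"], "children": []}
m = {"label": "m", "objects": [], "children": []}
skin = {"label": "skin", "objects": [], "children": [h, m]}

endo(skin, h, m, "a", "b")      # h moves inside m, a -> b
assert h in m["children"] and h["objects"] == ["b"]
exo(skin, h, m, "b", "c")       # h moves back out, b -> c
assert h in skin["children"] and h["objects"] == ["c"]
```

Note how the whole membrane h (with all its contents) is moved, matching principle (4) above: the move transports the membrane together with its objects.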
This was achieved by using local evolution rules (levol) instead of global evolution rules (gevol).

Theorem 3.6 ([AC11]). PsMM3(levol, endo, exo) = PsRE.

Proof. From Lemma 3.2 we have that PsMM3(levol, endo, exo) ⊆ PsRE, so in what follows we prove that PsRE ⊆ PsMM3(levol, endo, exo). Consider a matrix grammar G = (N, T, S, M, F) in the improved strong binary normal form (hence with N = N1 ∪ N2 ∪ {S, †}), having n1 matrices
of types (2) and (4) (that is, not used in the appearance checking mode), and n2 matrices of type (3) (with appearance checking rules). Let B(1) and B(2) be the two objects in N2 for which we have rules B(j) → † in matrices of M. The matrices of the form (X → Y, B(j) → †) are labelled by m′i, n1 + 1 ≤ i ≤ n1 + n2, with i ∈ labj for j ∈ {1, 2}, such that lab1, lab2, and lab0 = {1, . . . , n1} are mutually disjoint sets. We construct a system of three simple mobile membranes Π = (V, H, µ, w1, w2, w3, R, 2), where:
V = N ∪ {X, Xi,j | X ∈ N1, 1 ≤ i ≤ n1, 0 ≤ j ≤ n1} ∪ {Xj | X ∈ N1, n1 + 1 ≤ j ≤ n1 + n2} ∪ {a, a′ | a ∈ T} ∪ {x | x ∈ (N2 ∪ T)∗} ∪ {A, Ai,j, ai,j | a ∈ T, A ∈ N2, 1 ≤ i ≤ n1, 0 ≤ j ≤ n1}
H = {1, 2, 3}
µ = {(1, 2), (1, 3)}
w2 = XA, where (S → XA) is the initial matrix of G
wh = λ, for all h ∈ {1, 3}
The set R of rules is constructed as follows:
(i) For each (nonterminal) matrix mi : (X → Y, A → x), X, Y ∈ N1, A ∈ N2, x ∈ (N2 ∪ T)∗, with 1 ≤ i ≤ n1, we consider the rules:
1. [X]2 [ ]3 → [[Xi,0]2]3 (endo)
2. [[A]2]3 → [Aj,0]2 [ ]3 (exo)
3. [[Xi,k → Xi,k+1]2]1, k < i (levol)
4. [[Aj,k → Aj,k+1]2]1, k < j (levol)
5. [[Aj,j Xi,i → xY]2]1 (levol)
6. [[Aj,i Xi,i → †]2]1, i < j (levol)
7. [[Aj,j Xi,j → †]2]1, j < i (levol)
In the initial configuration, we have the objects X and A corresponding to the initial matrix in membrane 2. To simulate a matrix of the above type we start by applying the endocytosis rule 1, thus replacing X with Xi,0, followed by the exocytosis rule 2, thus replacing a single A ∈ N2 with Aj,0. No other A ∈ N2 can be replaced until membrane 2 enters membrane 3. Rule 3 (for X) and rule 4 (for A) are used to increment the second indices of X and A. This is done to check whether the indices of X and A are the same, and in this case to rewrite A according to the matrix mi. Once the indices are equal (i = j), rule 5 is applied to complete the simulation of matrix mi.
If the indices of X and A are not the same, rule 6 (if the second index of X is lower than the second index of A) or rule 7 (if the second index of X is bigger than the second index of A) is applied, thus leading to an infinite computation (rules 13 and 14).
(ii) For each matrix m′i : (X → Y, B(j) → †), X, Y ∈ N1, where n1 + 1 ≤ i ≤ n1 + n2, i ∈ labj, j = 1, 2, we consider the rules:
8. [X]2 [ ]3 → [[Xi]2]3, for i ∈ labj (endo)
9. [[Xi B(j) → †]2]3, j = 1, 2 (levol)
10. [[Xi]2]3 → [Y]2 [ ]3 (exo)
The simulation of matrices of type (3) begins with a rule of type 8. This is followed by rule 9 in case B(j) exists, thus leading to an infinite computation (rules 13 and 14). If no B(j) exists, then rule 10 can be used to send out membrane 2, successfully completing the simulation.
(iii) For a terminal matrix mi : (X → a, A → x), X ∈ N1, a ∈ T, A ∈ N2, x ∈ T∗, where 1 ≤ i ≤ n1, we use the rules 1-7, where rule 5 is replaced by the rules:
11. [[aj,j Xi,i → a′Y]2]1 (levol)
12. [[a′]2]1 → [a]2 [ ]1 (exo)
13. [[†]2]3 → [†]2 [ ]3 (exo)
14. [†]2 [ ]3 → [[†]2]3 (endo)
Observe that the simulation of a type (4) matrix follows similar steps, except that we have an a in place of Y. During the finishing stages of a type (4) simulation, we use rule 11 to replace aj,j by a′, and then rewrite a′ to a when sending membrane 2 out of the skin membrane, namely membrane 1.

2. Enhanced Mobile Membranes

Biological Motivation. The immune system [53] is one of the fascinating inventions of nature, protecting us against billions of bacteria, viruses, and other parasites. It is constantly on the alert, attacking at the first sign of an invasion, most of the time without the knowledge of the host. The immune system is very complex, being composed of several types of cells and proteins which have different jobs to do in fighting foreign invaders. Phagocytes are cells specialized in finding and "eating" bacteria, viruses, and dead or injured body cells. There are three main types of phagocytes:
• Granulocytes are the cells which take the first stand during an infection. They attack any invaders in large numbers, and "eat" them. A small part of this kind of cells is specialized in attacking larger parasites such as worms.
• Macrophages ("big eaters") have a slower response time than the granulocytes, but they are larger, live longer, and have far greater capacities.
Macrophages also play a key part in alerting the rest of the immune system to the presence of invaders. Macrophages start out as white blood cells called monocytes; monocytes which leave the blood stream turn into macrophages.
• Dendritic cells are "eater" cells which devour intruders, like the granulocytes and the macrophages. Like the macrophages, the dendritic cells help with the activation of the rest of the immune system. They are also capable of filtering body fluids to clear them of foreign organisms and particles.
White blood cells called lymphocytes originate in the bone marrow but migrate to parts of the lymphatic system such as the lymph nodes, spleen, and thymus. There are two main types of lymphatic cells: T cells and B cells. The lymphatic system also involves a network of lymph vessels for the transportation and storage of lymphocyte cells within the body. The lymphatic system feeds cells into the body and filters out dead cells and invading organisms such as bacteria. On the surface of each lymphatic cell are receptors which enable it to recognize foreign substances. These receptors are very specialized: each can match only one specific antigen.

Once a macrophage phagocytizes a cell, it places portions of the cell's proteins, called T cell epitopes, on the macrophage surface. These surface markers serve as an alarm to other immune cells, which then infer the form of the invader. Macrophages engulf bacteria, process them internally, and display antigen parts on the cell surface together with MHC molecules. The MHC molecules (major histocompatibility complex) are molecules which display peptide antigen to T cells [2]. The only known function of dendritic cells is to present antigen to T cells. The mature dendritic cells found in lymphoid tissues are by far the most potent stimulators of naive T cells. Immature dendritic cells persist in the peripheral tissues for variable lengths of time. When an infection occurs, they are stimulated to migrate via the lymphatics to the local lymphoid tissues, where they have a completely different phenotype. The dendritic cells in lymphoid tissue are no longer able to engulf antigens by phagocytosis or by macropinocytosis. However, they now express very high levels of long-lived MHC class I and MHC class II molecules; this enables them to stably present peptides from proteins acquired from the infecting pathogens.
Tissue dendritic cells reaching the end of their life-span without having been activated by infection also travel via the lymphatics to local lymphoid tissue. Because they do not express the appropriate costimulatory molecules, these cells induce tolerance to any self antigens derived from peripheral tissues which they display. The signals which activate tissue dendritic cells to migrate and mature after taking up antigen are clearly of key importance in determining whether an adaptive immune response will be initiated. For example, receptors which recognize lipopolysaccharide (LPS) are found on dendritic cells and macrophages, and these associate with the toll-like signalling receptor TLR4.

T cells undergo a selection process which retains those cells whose receptors interact effectively with various self peptide:self MHC ligands (positive selection), and removes those T cells which either cannot participate in such interactions (death by neglect) or recognize a self peptide:self MHC complex so well that they could damage host cells if allowed to mature; such cells are removed by clonal deletion (negative selection). Those T cells which mature and emerge into the periphery have therefore been selected for their ability to recognize self MHC:self peptide complexes without being fully activated
2. ENHANCED MOBILE MEMBRANES
129
by them. Cytotoxic or killer T cells (CD8+) do their work by releasing lymphotoxins, which cause cell lysis. Helper T cells (CD4+) serve as regulators, and trigger various immune responses.

The B lymphocyte cell searches for antigen matching its receptors. If it finds such antigen, it connects to it, and a triggering signal is set off inside the B cell. The B cell then needs proteins produced by helper T cells to become fully activated. When this happens, the B cell starts to divide to produce clones of itself. During this process, two new cell types are created: plasma cells and memory B cells. Plasma cells are specialized in producing a specific protein, called an antibody, which responds to the same antigen that matched the B cell receptor. Antibodies are released from the plasma cell so that they can seek out intruders and help destroy them. Plasma cells produce antibodies at an amazing rate and can release tens of thousands of antibodies per second.

When the Y-shaped antibody finds a matching antigen, it attaches to it. The attached antibodies serve as an appetizing coating for eater cells such as the macrophage. Antibodies also neutralize toxins and incapacitate viruses, preventing them from infecting new cells. Each branch of the Y-shaped antibody can bind to a different antigen, so while one branch binds to an antigen on one cell, the other branch can bind to another cell; in this way pathogens are gathered into larger groups which are easier for phagocyte cells to devour. Bacteria and other pathogens covered with antibodies are also more likely to be attacked by the proteins of the complement system. The complement system is one of the major mechanisms by which pathogen recognition is converted into an effective host defense against initial infection; it is composed of plasma proteins which lead to a cascade of reactions occurring on the surface of pathogens and generating active components with various effector functions.
This activity was said to 'complement' the antibacterial activity of antibody, hence the name. Memory B cells are the second cell type produced by the division of B cells. These cells form the basis of the immunological memory which ensures a more rapid and effective response on a second encounter with a pathogen, and thereby provides lasting protective immunity. T cells can also produce memory cells, with an even longer life span than memory B cells. Unlike memory T cells, which can travel to tissues as a result of cell-surface molecules which affect migration and homing, it is thought that memory B cells continue to recirculate through the same secondary lymphoid compartments which contain naive B cells, principally the follicles of spleen, lymph node, and Peyer's patch. Some memory B cells can also be found in marginal zones, though it is not clear whether these represent a distinct subset of memory B cells.

The distinction between the systems of simple mobile membranes and the systems of enhanced mobile membranes is given by three new rules ((a), (d) and (e) in Definition 3.7) which are inspired by the behaviour of the immune system. The contextual evolution rule (a) states that a membrane may evolve in a certain
context. The other two rules describe the forced endocytosis and exocytosis; rule (d) is called "enhanced endocytosis", while rule (e) is called "enhanced exocytosis". The multiset of objects u indicates the membrane having the control in each evolution rule.

Definition 3.7 ([AC7]). A system of n ≥ 1 enhanced mobile membranes is a construct Π = (V, H, µ, w1, . . . , wn, R), where
(1) n, V, H, µ, w1, . . . , wn are as in Definition 3.1;
(2) R is a finite set of developmental rules of the following forms:
(a) [[w]m [u]h]k → [[w]m [v]h]k for h, k, m ∈ H, u ∈ V+, v, w ∈ V∗; contextual evolution
These rules are called contextual because the evolution of a multiset of objects u of membrane h is possible only when membrane h is placed in the same region as membrane m (containing a multiset of objects w) and both membranes h and m are placed inside a membrane k. If the multiset of objects w is not specified, then the evolution is allowed simply in the context of membrane m placed in the membrane k.
(b) [uv]h [v′]m → [w′[w]h]m for h, m ∈ H, u ∈ V+, v, v′, w, w′ ∈ V∗; endocytosis
An elementary membrane labelled h (containing a multiset of objects uv) enters the adjacent membrane labelled m (containing a multiset of objects v′). The labels h and m remain unchanged during this process; however, the multisets of objects uv and v′ are replaced with the multisets of objects w and w′ during the operation.
(c) [v′[uv]h]m → [w]h [w′]m, for h, m ∈ H, u ∈ V+, v, v′, w, w′ ∈ V∗; exocytosis
An elementary membrane labelled h (containing a multiset of objects uv) is sent out of a membrane labelled m (containing a multiset of objects v′). The labels of the two membranes remain unchanged, but the multisets of objects uv and v′ are replaced with the multisets of objects w and w′ during the operation.
(d) [v]h [uv′]m → [[w]h w′]m for h, m ∈ H, u ∈ V+, v, v′, w, w′ ∈ V∗; enhanced endocytosis
An elementary membrane labelled h (containing a multiset of objects v) is engulfed into the adjacent membrane labelled m (containing a multiset of objects uv′). The labels h and m remain unchanged during the process; however, the multisets of objects uv′ and v are transformed into the multisets of objects w′ and w during the operation. The effect of this rule is similar to the effect of rule (b); the main difference is that the movement is not controlled by a multiset of objects placed inside the
moving membrane h, but by a multiset of objects uv′ placed inside the membrane m which engulfs membrane h. This means that the membrane which initiates the move is membrane m, and not the membrane h as in rule (b).
(e) [uv′[v]h]m → [w]h [w′]m for h, m ∈ H, u ∈ V+, v, v′, w, w′ ∈ V∗; enhanced exocytosis
An elementary membrane labelled h (containing a multiset of objects v) is expelled from a membrane labelled m (containing a multiset of objects uv′). The labels of the two membranes remain unchanged; however, the multisets of objects uv′ and v evolve into the multisets of objects w′ and w. The effect of this rule is similar to the effect of rule (c); the main difference is that the movement is not controlled by a multiset of objects placed inside the moving membrane h, but by a multiset of objects uv′ placed inside the membrane m which expels membrane h. This means that the membrane which initiates the move is membrane m, and not the membrane h as in rule (c).
The rules from Definition 3.7 are applied according to similar principles as for systems of simple mobile membranes. The multiset u in Definition 3.7 is the one indicating the membrane which initiates the move in the rules of types (b)-(e).

2.1. Modelling Power. We consider some examples from the immune system, and show how systems of enhanced mobile membranes are used to model the given scenarios. The first example illustrates how the dendritic cells protect the human organism against infections. Dendritic cells can engulf bacteria, viruses, and other cells. Once a dendritic cell engulfs a bacterium, it dissolves the bacterium and places portions of its proteins on its own surface. These surface markers serve as an alarm to other immune cells, namely helper T cells, which then infer the form of the invader. This mechanism sensitizes the T cells to recognize the antigens or other foreign agents which trigger a reaction from the immune system.
Antigens are often found on the surface of bacteria and viruses. In order to simulate the evolution presented in Figure 1, we first need to encode all the components of the immune system into a system of enhanced mobile membranes. This can be done by associating a membrane to each component, and objects to the signals, states, and parts of molecules. For the steps performed by the dendritic cells presented in Figure 1 we use the following encodings:
• dendritic cell - [eat]DC
An immature dendritic cell is willing to eat any bacterium it encounters, so we translate it into a membrane labelled by DC which contains an object eat used to engulf the bacterium. Once the dendritic cell matures, the object eat is consumed.
• bacterium cell - [antigen]bacterium
A bacterium cell contains antigen, so we simply represent it as a membrane labelled by bacterium containing a single object antigen which carries the information of the bacterium.
• lymph node - [ ]lymph node
The lymph node is the place where the mature dendritic cells migrate in order to start the immune response, so we translate it into a membrane labelled by lymph node.
Figure 1. Protection Against Infection [53]

Using the membranes obtained above, we can describe the membrane system as follows (here body stands for the body skin):
[[eat]DC [ ]lymph node]body [antigen]bacterium
with the following rules describing its evolution:
* [antigen]bacterium [ ]body → [[antigen]bacterium]body
A bacterium enters through the skin by performing an endocytosis rule in order to infect the body. The bacterium contains an object antigen which represents its signature.
* [eat]DC [ ]bacterium → [eat [ ]bacterium]DC
Once an immature dendritic cell is placed in the same region as a bacterium, it "eats" the bacterium by performing an enhanced endocytosis rule. Until this moment the bacterium has controlled its own movement; in this step of the evolution the movement becomes controlled by the dendritic cell which eats the bacterium.
* [[antigen]bacterium]DC → [[antigen δ]bacterium]DC
Once the bacterium has entered the dendritic cell, an object δ is created in order to dissolve the membrane bacterium, and the content of the bacterium is released into the dendritic cell.
* [antigen]DC [ ]lymph node → [[antigen]DC]lymph node
2. ENHANCED MOBILE MEMBRANES
133
Once the dendritic cell contains parts of antigen, it enters the lymph node in order to activate a special class of T cells, namely the helper T cells. * [[eat → λ]DC ]lymph node Once the dendritic cell enters the lymph node it matures and the capacity to engulf bacteria disappears, namely the eat object is consumed. Using only these few rules we can simulate the way a bacterium is engulfed and its content is displayed by the eater cell. The proteins produced by helper T cells activate the B cells.
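The evolution described by these five rules can be traced step by step in a small sketch. This is our own illustrative encoding, not part of the formal model: membranes are nested Python dicts, the labels are adapted to identifiers (`lymph_node` for "lymph node"), and `move` is a hypothetical helper standing in for the endocytosis-style relocations.

```python
# A minimal sketch (illustrative only): membranes are nested dicts with a
# label, a multiset of objects, and child membranes. The rules above are
# applied as explicit tree transformations.

def membrane(label, objects=(), children=()):
    return {"label": label, "objects": list(objects), "children": list(children)}

def find(m, label):
    """Depth-first search for the first membrane with the given label."""
    if m["label"] == label:
        return m
    for c in m["children"]:
        hit = find(c, label)
        if hit is not None:
            return hit
    return None

def move(parent, child_label, new_parent_label):
    """Endocytosis-style step: relocate a membrane inside another membrane."""
    child = next(c for c in parent["children"] if c["label"] == child_label)
    parent["children"].remove(child)
    find(parent, new_parent_label)["children"].append(child)

# Initial configuration: [[eat]DC [ ]lymph_node]body  [antigen]bacterium
skin = membrane("environment", children=[
    membrane("body", children=[membrane("DC", ["eat"]),
                               membrane("lymph_node")]),
    membrane("bacterium", ["antigen"]),
])

move(skin, "bacterium", "body")        # bacterium enters through the skin
body = find(skin, "body")
move(body, "bacterium", "DC")          # dendritic cell engulfs the bacterium
dc = find(skin, "DC")
bact = dc["children"].pop()            # delta dissolves the bacterium membrane...
dc["objects"] += bact["objects"]       # ...releasing its contents into the DC
move(body, "DC", "lymph_node")         # DC carries the antigen to the lymph node
dc["objects"].remove("eat")            # eat is consumed on maturation

print(sorted(dc["objects"]))           # the mature DC now carries the antigen
```

Running the sketch ends with the dendritic cell inside the lymph node, holding only the antigen, mirroring the final configuration of the rules above.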
Figure 2. Activation of T Cells and B Cells [53] For the process of activating the helper T cells and B cells we use the following encodings: • helper T cell - [passive]helper T cell A helper T cell is initially passive, so we represent it as a membrane labelled helper T cell in which the object passive is placed. When the cell is activated, the object passive is transformed into active. • B cell - [passive]B cell A B cell is initially passive, so we represent it as a membrane labelled B cell in which the object passive is placed. When the cell is activated, the object passive is transformed into active. The activation of the helper T cells and B cells is conditioned by the presence of the dendritic cells in the lymph node, and that is why we use the following contextual evolution rules: * [[antigen]DC [passive]helper T cell ]lymph node → [[antigen]DC [active]helper T cell ]lymph node Once the dendritic cell containing parts of antigen enters the lymph node, it activates a special class of T cells, namely the helper T cells.
This is denoted by changing the object passive to active in helper T cells. * [[passive]B cell [active]helperT cell ]lymph node → [[active]B cell [active]helperT cell ]lymph node Once the helper T cells are activated, the B cells in the same region with them are activated next. The B cell searches for antigen matching its receptors. If it finds such antigen, then a triggering signal is set off inside the B cell. Using the proteins produced by helper T cells, the B cell starts to divide and produce clones of itself. During this process, two new cell types are created: plasma cells which produce antibodies, and memory cells which are used to “remember” specific intruders. These examples motivate the introduction of the new class of systems of mobile membranes; more exactly, they motivate the new rules and the way they can be used in modelling some biological systems. 2.2. Computational Power. If Π is a P system with enhanced mobile membranes, N(Π) denotes the set of all numbers computed by Π at the end of a successful computation. This is calculated by looking at the cardinality of the objects in a specified output membrane of the system at the end of a successful computation. We denote by NEM Mm (α) the family of all sets of numbers N(Π) generated by P systems Π with enhanced mobile membranes having rules α ⊆ {exo, endo, f endo, f exo}, where m represents the number of membranes in Π. Here endo and exo represent endocytosis and exocytosis, while f endo and f exo represent forced endocytosis and forced exocytosis. Theorem 3.8. [CK2] NEM M12 (endo, exo, f endo, f exo) = NRE. Proof. We only prove the assertion NRE ⊆ NEM M12 (endo, exo, f endo, f exo); the other inclusion follows from the fact that, according to the Church-Turing thesis, Turing machines or type-0 grammars are able to simulate membrane systems with enhanced mobile membranes.
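Since the proof proceeds by simulating register machines, it may help to recall that model concretely. The following is an illustrative sketch with our own instruction encoding; the example program computing f(x) = 2x is ours and is not part of the construction in the proof.

```python
# A sketch of the register machine model simulated in the proof: numbered
# instructions ("ADD", r, j) meaning "increment register r, go to Pj",
# ("SUB", r, j, k) meaning "if register r > 0, decrement it and go to Pj,
# else go to Pk", and "HALT". (Illustrative encoding, not the author's.)

def run(program, x):
    regs = [x, 0, 0]          # register 1 holds the input x
    pc = 1                    # P1 is the first instruction
    while program[pc] != "HALT":
        instr = program[pc]
        if instr[0] == "ADD":
            _, r, j = instr
            regs[r - 1] += 1
            pc = j
        else:                 # SUB instruction
            _, r, j, k = instr
            if regs[r - 1] > 0:
                regs[r - 1] -= 1
                pc = j
            else:
                pc = k
    return regs[2]            # register 3 holds the output, never decremented

# Example: f(x) = 2x, adding two to register 3 for each unit of register 1.
double = {1: ("SUB", 1, 2, 4), 2: ("ADD", 3, 3), 3: ("ADD", 3, 1), 4: "HALT"}
print(run(double, 5))         # 10
```

The membrane construction below mirrors exactly this control flow: the objects Pi play the role of the program counter, and the o1, o2, o3 objects the register contents.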
The proof that NRE ⊆ NEM M12 (endo, exo, f endo, f exo) is based on the observation that each set from NRE is the range of a recursive function. Thus, we prove that for each recursively enumerable function f : N → N, there is a membrane system Π with 12 membranes satisfying the following condition: for any arbitrary x ∈ N, the system Π first “generates” a multiset of the form o1^x and halts if and only if f (x) is defined, and in such a situation the result of the computation is f (x). In order to prove this assertion, we consider a register machine with 3 registers, the last one being a special output register which is never decremented. Let P be a program consisting of h instructions P1 , . . . , Ph which computes f . Let Ph correspond to the instruction HALT and P1 be the first instruction. The input value x is expected to be in register 1 and
the output value in register 3. Without loss of generality, we can assume that all registers other than the first one are empty at the beginning of a computation. We can also assume that at the end of a successful computation all registers except the third, where the result of the computation is stored, are empty. We construct a P system with enhanced mobile membranes Π = (V, H, µ, w0 , . . . , wh , R, I), where V = {o1 , o2 , o3 } ∪ {Pk , Pk′ , Pk′′ , Pk′′′ | 1 ≤ k ≤ h} ∪ {ηi , ηi′ , ηi′′ , δi , δi′ | i = 1, 2}, H = {0, I, add, sub, 1, 2, 3, 4, 5, 6, 7, h}, µ = [ [ ] I [ ] add [ [ ] 1 [ ] 2 [ ] 3 [ ] 4 [ ] 5 [ ] 6 ] sub [ [ ] 7 ] h ] 0 , w0 = ∅ = wadd = wsub = wh = w7 = w1 = w4 , wI = s, w2 = η1 , w5 = η2 , w3 = δ1 , w6 = δ2 (1) Generation of the initial content x of register 1: 1. [ s] I [ ] add → [ [ s] I ] add , [ [ s] I ] add → [ so1 ] I [ ] add , (endo, exo), 2. [ [ s] I ] add → [ P1 ] I [ ] add , (exo). Rule 1 can be used a number of times to generate o1^x as the initial content of register 1. Rule 2 replaces s with the initial instruction P1 , and so we are ready to start the simulation of the register machine. (2) Simulation of an addition rule Pi = (ADD(r), j), with 1 ≤ r ≤ 3, 1 ≤ i < h: 3. [ Pi ] I [ ] add → [ [ Pi ] I ] add , [ [ Pi ] I ] add → [ Pj or ] I [ ] add . Membrane I enters membrane add by an endo rule; the corresponding exo rule replaces Pi with Pj or , simulating an ADD instruction. (3) Simulation of a subtraction rule Pi = (SU B(1), j, k), with 1 ≤ i < h. Case 1: Register 1 is non-zero. 4. [ Pi ] I [ ] sub → [ [ Pi ] I ] sub , (endo), 5. [ Pi ] I [ ] 1 → [ [ Pj′ ] I ] 1 , (endo),
6. [ [ o1 ] I ] 1 → [ λ] I [ ] 1 , [ [ Pj′ ] I ] 1 → [ #] I [ ] 1 , (exo), 7. [ [ Pj′ ] I ] sub → [ Pj ] I [ ] sub .
Case 2: Register 1 is zero. 8. [ Pi ] I [ ] 2 → [ [ Pk′ ] I ] 2 , (endo), 9. [ [ ] I η1 ] 2 → [ ] I [ η1′ ] 2 , (fexo),
10. [ ] 2 [ Pk′ ] I → [ Pk′′ [ ] 2 ] I , (fendo), [ Pk′ ] I [ ] 1 → [ [ #] I ] 1 , (endo),
11. [ o1 [ ] 2 ] I → [ #] I [ ] 2 , (fexo), [ ] 3 [ Pk′′ ] I → [ Pk′′′ [ ] 3 ] I , (fendo),
12. [ ] 3 [ η1′ ] 2 → [ η1′′ [ ] 3 ] 2 , (fendo), 13. [ [ δ1 ] 3 ] 2 → [ δ1′ ] 3 [ ] 2 , (exo),
14. [ [ δ1′ ] 3 ] I → [ δ1 ] 3 [ ] I , (exo),
15. [ [ η1′′ ] 2 ] I → [ η1 ] 2 [ ] I , (exo),
16. [ [ Pk′′′ ] I ] sub → [ Pk ] I [ ] sub , (exo). Rules 4-16 simulate a subtraction by using register 1. Subtraction in register 2 can be done similarly, by using membranes 4,5,6 in place of membranes 1,2,3. In order to simulate a SUB instruction, we start by applying rule 4 such that membrane I enters into membrane sub. Let us consider the case when register 1 is not zero. Then rule 5 is used, by which Pi is replaced by Pj′ , and membrane I enters membrane 1. If there is an o1 in membrane I, then by rule 6, the o1 is erased and membrane I comes out of membrane 1; this is followed by rule 7, where Pj′ is replaced by Pj and membrane I is back inside the skin membrane. If there are no o1 ’s in membrane I, then Pj′ is replaced with the trap symbol # giving an infinite computation. If register 1 is zero, rule 8 is used after rule 4. Pi is replaced with Pk′ and membrane I enters membrane 2. The fexo rule 9 is used replacing η1 of membrane 2 with η1′ . Membrane I with Pk′ is again inside membrane sub, and the fendo rule 10 is used, replacing Pk′ with Pk′′ while membrane 2 is engulfed by membrane I. With membrane 2 inside membrane I, two rules happen in parallel: an fexo rule replacing o1 (if any) with the trap symbol # while membrane 2 is expelled from membrane I, and an fexo rule replacing Pk′′ with Pk′′′ while membrane 3 is engulfed by membrane I. If there were no o1 ’s in membrane I, then we have membranes 2,3 inside membrane I, membrane 2 contains η1′ and membrane I contains Pk′′′ . Rule 12 can now be used, and membrane 3 is engulfed by membrane 2, replacing η1′ by η1′′ . Note that this rule is not applicable earlier when membranes 2,3 were adjacent inside membrane sub. This is followed by the exo rule 13 replacing δ1 with δ1′ . Now, the applicable rules are exo rules 14 and 15 by which membranes 2 and 3 go out of membrane I replacing η1′′ and δ1′ with η1 and δ1 respectively. At the end of these, membrane I
becomes elementary, and we can use the exo rule 16, replacing Pk′′′ with Pk . This completes the simulation of the SUB instruction. (4) Halting : 17. [ Ph ] I [ ] h → [ [ λ] I ] h , (endo), 18. [ oi ] I [ ] 7 → [ [ oi ] I ] 7 , i = 1, 2, (endo), 19. [ [ oi ] I ] 7 → [ λ] I [ ] 7 , i = 1, 2, (exo). Ph must be obtained in order to halt the computation. Once we obtain Ph in membrane I, we use rule 17 which erases Ph , and membrane I enters membrane h. Inside membrane h, the symbols o1 , o2 are removed from membrane I. When the system halts, membrane I contains only o3 ’s, namely the contents of register 3. The multiplicity vector of the multiset from a special membrane called the output membrane is considered as the result of the computation. Thus, the result of a halting computation consists of all the vectors describing the multiplicity of objects from the output membrane; a non-halting computation provides no output. The set of vectors of natural numbers produced in this way by a system Π is denoted by P s(Π). A computation can produce several vectors, all of them considered in the set P s(Π). The family of all sets P s(Π) generated by systems of degree at most n using rules α ⊆ {exo, endo, f endo, f exo, cevol} is denoted by P sEM Mn (α). The previous result is presented using this notation and a different proof. Theorem 3.9 ([CK2]). P sEM M12 (endo, exo, f endo, f exo) = P sRE. Proof. Consider a matrix grammar G = (N, T, S, M, F ) with appearance checking in the improved strong binary normal form (N = N1 ∪ N2 ∪ {S, #}), having n1 matrices m1 , . . . , mn1 of types 2 and 4 and n2 matrices of type 3. The initial matrix is m0 : (S → XA). Let B (1) and B (2) be two objects in N2 for which there are rules B (j) → # in matrices of M . The matrices of the form (X → Y, B (j) → #) are labelled by m′i , n1 + 1 ≤ i ≤ n1 + n2 with i ∈ labj , for j ∈ {1, 2}, such that lab1 , lab2 , and lab0 = {1, 2, . . . , n1 } are mutually disjoint sets. We construct a P system Π = (V, {1, . . .
, 12}, µ, w1 , . . . , w12 , R, 7) with
V = N1 ∪ N2 ∪ T ∪ {X′0,i , A′0,i | X ∈ N1 , A ∈ N2 , 1 ≤ i ≤ n1 } ∪ {Z, #} ∪ {Xj,i , Aj,i | X ∈ N1 , A ∈ N2 , 0 ≤ i, j ≤ n1 } ∪ {Xi(j) , Xj | X ∈ N1 , j ∈ {1, 2}, n1 + 1 ≤ i ≤ n1 + n2 },
µ = [ [ ] 7 [ ] 8 [ ] 9 [ ] 10 [ ] 11 [ [ ] 3 [ ] 5 ] 1 [ [ ] 4 [ ] 6 ] 2 ] 12 ,
w7 = {XA | m0 : (S → XA)}, wi = ∅, i ≠ 7.
(1) Simulation of a matrix mi : (X → Y, A → x), where 1 ≤ i ≤ n1 . 1. [ X] 7 [ ] 8 → [ [ Xi,i ] 7 ] 8 , [ [ A] 7 ] 8 → [ Aj,j ] 7 [ ] 8 , (endo, exo),
2. [ Xk,i ] 7 [ ] 9 → [ [ Xk−1,i ] 7 ] 9 , [ [ Ak,j ] 7 ] 9 → [ Ak−1,j ] 7 [ ] 9 , k > 0, (endo, exo), 3. [ ] 10 [ X0,i ] 7 → [ X′0,i [ ] 10 ] 7 , [ ] 11 [ A0,j ] 7 → [ A′0,j [ ] 11 ] 7 , (fendo),
4. [ ] 10 [ Xk,i ] 7 → [ #[ ] 10 ] 7 , [ ] 11 [ Ak,j ] 7 → [ #[ ] 11 ] 7 , k > 0, (fendo), [ [ A0,j ] 7 ] 9 → [ #] 7 [ ] 9 , (exo),
5. [ X′0,i [ ] 10 ] 7 → [ ] 10 [ Y ] 7 , [ A′0,j [ ] 11 ] 7 → [ ] 11 [ x] 7 , (fexo)
By rule 1, membrane 7 enters membrane 8, replacing X ∈ N1 with Xi,i . A symbol A ∈ N2 is replaced with Aj,j , and membrane 7 comes out of membrane 8. The suffixes i, j represent the matrices mi (mj ), 1 ≤ i, j ≤ n1 , corresponding to which X, A have a rule. Next, rule 2 is used until Xi,i and Aj,j become X0,i and A0,j , respectively. If i = j, then we have X0,i and A0,j simultaneously in membrane 7. Then rule 3 is used, by which membranes 10 and 11 are engulfed by membrane 7, replacing X0,i and A0,j with X′0,i and A′0,j , respectively. This is then followed by rule 5, when membranes 10 and 11 are expelled from membrane 7 by fexo rules, replacing X′0,i and A′0,j with Y and x. If i > j, then we obtain A0,j before X0,i . In this case, membrane 7 is inside membrane 9 containing A0,j . Then rule 4 is used, replacing A0,j with #, and an infinite computation will be obtained (rule 13). If j > i, then we obtain X0,i before A0,j . In this case, we reach a configuration with X0,i Ak,j , k > 0, in membrane 7, and membrane 7 is in the skin membrane. Rule 2 cannot be used now, and the only possibility is to use rule 4, which leads to an infinite computation (due to rule 13). Thus, if i = j, then we can correctly simulate a type-2 matrix. (2) Simulation of a matrix m′i : (X → Y, B (j) → #), with j ∈ {1, 2} and n1 + 1 ≤ i ≤ n1 + n2 .
6. [ X] 7 [ ] j → [ [ Xi(j) ] 7 ] j , (endo),
7. [ ] j+2 [ Xi(j) ] 7 → [ Xi(j) [ ] j+2 ] 7 , [ ] j+4 [ B (j) ] 7 → [ #[ ] j+4 ] 7 , (fendo),
8. [ Xi(j) [ ] j+2 ] 7 → [ ] j+2 [ Yj ] 7 , (fexo),
9. [ Yj [ ] j+4 ] 7 → [ ] j+4 [ Yj ] 7 , (fexo),
10. [ [ Yj ] 7 ] j → [ Y ] 7 [ ] j , (exo).
To simulate a matrix of type 3, we start with rule 6. Membrane 7 enters either membrane 1 or membrane 2, depending on whether it has the symbol B (1) or B (2) associated to it. Inside membrane j, rule 7 is used, by which membrane j + 2 is engulfed by membrane 7, and membrane j + 4 is engulfed by membrane 7 if the symbol B (j) is present. In this case, B (j) is replaced with #. Otherwise, membrane j + 2 is expelled from membrane 7, replacing Xi(j) with
Yj . Membrane 7 exits membrane j, replacing Yj by Y , successfully simulating a matrix of type 3. (3) Halting and simulation of mi : (X → λ, A → x) with 1 ≤ i ≤ n1 . We begin with rules 1-5 as before and simulate the matrix (X → Z, A → x) in place of mi , where Z is a new symbol. The special symbol Z is obtained in membrane 7 at the end. 11. [ ] 8 [ Z] 7 → [ λ[ ] 8 ] 7 , (fendo), 12. [ A[ ] 8 ] 7 → [ ] 8 [ #] 7 , A ∈ N2 , (fexo), 13. [ ] 8 [ #] 7 → [ #[ ] 8 ] 7 , [ #[ ] 8 ] 7 → [ ] 8 [ #] 7 , (fendo, fexo). Rule 11 is used to erase the symbol Z while membrane 8 is engulfed by membrane 7. This is followed by rule 12 if there are symbols A ∈ N2 remaining, in which case they are replaced with the trap symbol #. An infinite computation is obtained if the symbol # is present in membrane 7. If the computation proceeds correctly, then membrane 7 contains a multiset of terminal symbols x ∈ T ∗ such that ψV (x) ∈ P sL(G). Theorem 3.10 ([CK2]). P sEM M3 (cevol) = P sRE, where cevol is the contextual evolution. Proof. Consider a matrix grammar G = (N, T, S, M, F ) in the strong binary normal form. Let there be n1 matrices 1, 2, . . . , n1 of types 2 and 4, and n2 matrices 1, 2, . . . , n2 of type 3. We replace the type-4 matrix (X → λ, A → x), x ∈ T ∗ , by a matrix (X → Z, A → x), where Z is a new symbol. We construct an enhanced mobile membrane system Π with 3 membranes using only cevol rules as follows: Π = (V, {1, 2, 3}, [ [ ] 2 [ ] 3 ] 1 , w1 , . . . , w3 , R, 3), where
V = N1 ∪ N2 ∪ T ∪ {XA , XA′ , XA′′ , kA , kA′ , CX | X ∈ N1 , A ∈ N2 } ∪ {β, η, γ, η ′ , #} ∪ {Y ′ , Ŷ , Y | Y ∈ N1 },
w1 = ∅, w2 = Xβ, w3 = Aη, (S → XA) is the first matrix.
The rules are given by the following steps: (1) Simulation of a type-2 matrix mi : (X → Y, A → x), 1 ≤ i ≤ n1 .
1. [ X] 2 [ A] 3 → [ YA ] 2 [ A] 3 , [ X] 2 [ α] 3 → [ #] 2 [ α] 3 , α ≠ A,
2. [ YA ] 2 [ A] 3 → [ YA ] 2 [ xkA ] 3 ,
3. [ β] 2 [ kA ] 3 → [ #] 2 [ kA ] 3 , [ YA ] 2 [ kA ] 3 → [ YA′ ] 2 [ kA ] 3 ,
4. [ YA′ ] 2 [ kA ] 3 → [ YA′ ] 2 [ kA′ ] 3 ,
5. [ YA′ ] 2 [ kA′ ] 3 → [ YA′′ ] 2 [ kA′ ] 3 , [ YA′ ] 2 [ η] 3 → [ #] 2 [ η] 3 ,
6. [ YA′′ ] 2 [ η] 3 → [ Y ] 2 [ η] 3 , [ β] 2 [ kA′ ] 3 → [ β] 2 [ λ] 3 ,
7. [ #] 2 [ α] 3 → [ #] 2 [ α] 3 , α ∈ V.
Rule 1 starts the simulation, replacing X by YA . Then, a copy of A is replaced by xkA . This is followed by replacing YA with YA′ , and then kA by kA′ . In the presence of kA′ , YA′ is replaced with YA′′ . Finally, YA′′ is replaced with Y and kA′ is erased. Note that if rule 3 is used more than once, then an infinite computation is obtained by replacing β with #. Similarly, if YA′ is not replaced with YA′′ using rule 5, then YA′ is replaced with the trap symbol #. This prevents the scenario in which kA′ is erased before obtaining YA′′ . If the above rules are used, we obtain Y in membrane 2, and x in membrane 3. (2) Simulation of a type-3 matrix m′i : (X → Y, B (j) → #), 1 ≤ i ≤ n2 . 8. [ X] 2 [ η] 3 → [ Y CX ] 2 [ η] 3 ,
9. [ CX ] 2 [ B (j) ] 3 → [ #] 2 [ B (j) ] 3 , if m′i corresponds to X and B (j) , [ Y ] 2 [ η] 3 → [ Ŷ ] 2 [ η] 3 ,
10. [ Ŷ ] 2 [ η] 3 → [ Ŷ ] 2 [ η ′ γ] 3 ,
11. [ CX ] 2 [ η ′ ] 3 → [ λ] 2 [ η ′ ] 3 , [ Ŷ ] 2 [ γ] 3 → [ Y ′ ] 2 [ γ] 3 ,
12. [ Y ′ ] 2 [ η ′ ] 3 → [ Y ′ ] 2 [ η] 3 , [ β] 2 [ γ] 3 → [ β] 2 [ λ] 3 ,
13. [ Y ′ ] 2 [ η] 3 → [ Y ] 2 [ η] 3 ,
14. [ Ŷ ] 2 [ η ′ ] 3 → [ Ŷ ] 2 [ η ′ ] 3 .
The simulation begins with rule 8, replacing X with Y CX . Next, rule 9 replaces CX with # if the symbol B (j) ∈ N2 is present and, in parallel, replaces Y with Ŷ ; rule 10 then replaces η with η ′ γ. Next, if CX is still present in membrane 2, then the symbol B (j) ∈ N2 is absent. In this case, CX is erased and Ŷ is replaced with Y ′ . This is followed by replacing η ′ with η, erasing γ (rule 12), and finally replacing Y ′ with Y (rule 13). Note that if γ is erased before replacing Ŷ with Y ′ , then an infinite computation is obtained by using rule 14. (3) Halting : Simulation of mi : (X → Z, A → x), 1 ≤ i ≤ n1 . Rules 1-7 are used to obtain Z in membrane 2 and x in membrane 3. 15. [ Z] 2 [ η] 3 → [ Z] 2 [ λ] 3 , 16. [ Z] 2 [ A] 3 → [ #] 2 [ A] 3 , A ∈ N2 . The symbol Z removes η from membrane 3. Then it checks whether any symbol A ∈ N2 remains in membrane 3. If one remains, an infinite computation is obtained using rule 7. Clearly, if the system halts, then P sL(G) ⊆ P s(Π). Theorem 3.11 ([CK2]). P sET 0L ⊆ P sEM M8 (endo, exo, f endo, f exo).
Proof. Let G = (V, T, ω, R1 , R2 ) be an ET0L system in the normal form. We construct the P system Π = (V ′ , H, µ, w0 , . . . , w7 , R, 0) as follows: V ′ = {†, gi , gi′ , gi′′ , gi′′′ , ji , hi , ki , ji′ , ji′′ , ji′′′ , h′i , h′′i , h′′′ i | i = 1, 2} ∪ V ∪ Vi , H = {0, 1, . . . , 7}, where Vi = {ai | a ∈ V, i = 1, 2} µ = [ [ ] 0 [ ] 1 [ ] 2 [ [ ] 4 [ ] 5 [ ] 6 ] 3 ] 7 , w0 = g1 ω, wi = ∅, i > 0. Simulation of table Ri , i ∈ {1, 2} 1. [ gi ] 0 [ ] i → [ [ gi ] 0 ] i , (endo), 2. [ [ a] 0 ] i → [ vi ] 0 [ ] i , if a → v ∈ Ri , (exo),
3. [ gi ] 0 [ ] 3 → [ [ gi′ ] 0 ] 3 , (endo),
4. [ ] 4 [ gi′ ] 0 → [ gi′′ [ ] 4 ] 0 , [ ] 5 [ a] 0 → [ † [ ] 5 ] 0 , (fendo),
5. [ gi′′ [ ] 4 ] 0 → [ ] 4 [ gi′′′ ] 0 , (fexo), [ † [ ] 5 ] 0 → [ † ] 0 [ ] 5 (fexo),
6. [ ] 5 [ † ] 0 → [ † [ ] 5 ] 0 , (fendo), 7. [ gi′′′ ] 0 [ ] 5 → [ [ gi′′′ ] 0 ] 5 , (endo),
8. [ [ ai ] 0 ] 5 → [ a] 0 [ ] 5 , (exo), 9. [ [ gi′′′ ] 0 ] 5 → [ ji hi ki ] 0 [ ] 5 , (exo), 10. [ ] 4 [ ji ] 0 → [ ji′ [ ] 4 ] 0 , [ ] 5 [ hi ] 0 → [ h′i [ ] 5 ] 0 , [ ] 6 [ ki ] 0 → [ λ[ ] 6 ] 0 , (fendo),
11. [ ji′ [ ] 4 ] 0 → [ ] 4 [ ji′′ h′′i ] 0 , [ h′i [ ] 5 ] 0 → [ ] 5 [ λ] 0 , [ ai [ ] 6 ] 0 → [ ] 6 [ † ] 0 , (fexo),
12. [ ji′′ [ ] 6 ] 0 → [ ] 6 [ ji′′′ ] 0 , (fexo), [ ] 5 [ h′′i ] 0 → [ h′′′ i [ ] 5 ] 0 , (fendo),
13. [ h′′′ i [ ] 5 ] 0 → [ ] 5 [ λ] 0 , (fexo),
14. [ [ j1′′′ ] 0 ] 3 → [ gi ] 0 [ ] 3 , i = 1, 2, [ [ j2′′′ ] 0 ] 3 → [ g1 ] 0 [ ] 3 , (exo),
[ [ j2′′′ ] 0 ] 3 → [ λ] 0 [ ] 3 , (exo), 15. [ † ] 0 [ ] 1 → [ [ † ] 0 ] 1 , (endo), [ [ † ] 0 ] 1 → [ † ] 0 [ ] 1 , (exo).
In the initial membrane, the string g1 ω is in membrane 0, where ω is the axiom, and g1 indicates that table 1 should be simulated first. The simulation begins with rule 1, with membrane 0 entering membrane 1. In membrane 1 the only applicable rule is the exo rule 2, by which the symbols a ∈ V are replaced by v1 corresponding to the rule a → v1 ∈ R1 . Rules 1 and 2 can be repeated until all the symbols a ∈ V are replaced according to a rule in R1 . Finally, if all the symbols a ∈ V have been replaced by rules of R1 , only symbols of V1 and the symbol g1 are in membrane 0. Rule 3 can be used anytime after this; symbol g1 is replaced with g1′ , and membrane 0 enters membrane 3. No rules of Ri can be simulated until membrane 0 goes out of membrane 3. Inside membrane 3, rule 4 is used. Membrane 4 is engulfed by membrane 0 by an fendo rule replacing g1′ with g1′′ ; in parallel, membrane 5 is engulfed by membrane 0 if any symbol a ∈ V is present in membrane 0, i.e., if some symbol a ∈ V has been left out from the simulation using a rule from R1 . The trap symbol † is introduced, and this triggers a never ending
computation (rules 5 and 6). Next, membrane 4 is expelled from membrane 0 replacing g1′′ with g1′′′ . Membrane 0 now enters membrane 5 using rule 7. Rules 8 and 7 are applied as long as required until all the symbols of V1 are replaced with the corresponding symbols of V . When all the symbols of V1 are replaced with symbols of V , rule 9 can be used to replace g1′′′ with j1 h1 k1 . Now we check if all the symbols of V1 are indeed replaced with symbols of V ; for this we use rule 10. The membranes 4, 5 and 6 are engulfed by membrane 0 in parallel by fendo rules replacing j1 , h1 and k1 with j1′ , h′1 and λ, respectively. Next, by fexo rule 11, membranes 4 and 5 are expelled from membrane 0 replacing j1′ with j1′′ h′′1 and h′1 with λ, respectively. If any symbol of a1 ∈ V1 is present in membrane 0, then membrane 6 is also expelled (fexo rule 11), replacing it with the trap symbol †. In case there are no symbols of V1 in membrane 0, membrane 6 stays inside membrane 0 until we obtain j1′′ h′′1 . Using the fexo rule 12, membrane 6 is expelled replacing j1′′ with j1′′′ , while in parallel, membrane 5 is engulfed by membrane 0 using the fendo rule 12 replacing h′′1 with h′′′ 1 (Note that if we choose to use fendo rule 4 instead of rule 12, the trap symbol † is introduced in membrane 0). Membrane 5 is expelled from membrane 0 erasing h′′′ 1 . Once membrane 0 becomes elementary, it goes out of membrane 3 replacing j1′′′ with gi . If j1′′′ is replaced with g1 , then table 1 is simulated again; otherwise table 2 is simulated. The computation can stop only after simulating table 2. If table 2 is simulated, we obtain j2′′′ at the end of the simulation. j2′′′ is replaced with either (i) g1 , in which case we simulate table 1, or (ii) λ, in which case membrane 0 remains idle inside the skin membrane, in the absence of the trap symbol †. It is clear that P s(Π) contains only the vectors in P s(L(G)). Corollary 3.12 ([CK2]). P sE0L ⊆ P sEM M7 (endo, exo, f endo, f exo). 
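For intuition about the model simulated above, an ET0L derivation step rewrites every symbol of the current string in parallel according to one nondeterministically chosen table. The following sketch is our own; the two-table example generating {a^(2^n) | n ≥ 0} is illustrative and is not the system constructed in the proof.

```python
# A sketch of an ET0L derivation step: each table is a complete set of
# context-free rules, applied simultaneously to all symbols of the word.
# (Illustrative tables; not the construction from Theorem 3.11.)

def apply_table(word, table):
    """Rewrite every symbol of the word in parallel using the chosen table."""
    return "".join(table[a] for a in word)

# Table 1 doubles the number of A's; table 2 terminates the derivation.
R1 = {"A": "AA"}
R2 = {"A": "a"}

word = "A"                      # the axiom
for _ in range(3):
    word = apply_table(word, R1)
word = apply_table(word, R2)
print(word)                     # aaaaaaaa (2^3 symbols)
```

In the membrane construction above, membranes 1 and 2 play the role of the two tables, and the trap symbol † punishes any symbol that escapes the parallel rewriting.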
We can interpret the multiset of objects present in the output membrane as a set of strings x such that the multiplicity of symbols in x is the same as the multiplicity of objects in the output membrane. In this way, the multiset of objects in the output membrane generates a language. For a system Π, let L(Π) represent this language (all strings computed by Π), and let LEM Mn (α) represent the family of languages L(Π) generated by systems having ≤ n membranes, using a set of operations α ⊆ {endo, exo, f endo, f exo}. We get the following results. Lemma 3.13 ([CK2]). LEM M8 (endo, exo, f endo, f exo) − ET 0L ≠ ∅. Proof. L = {x ∈ {a, b}∗ | |x|b = 2|x|a } ∉ ET 0L [41]. We construct Π = ({a, b, b′ , c, η, η ′ , η ′′ , η ′′′ , †}, {0, . . . , 7}, [ [ ηb] 1 [ [ ] 3 [ ] 4 ] 2 [ [ ] 6 [ ] 7 ] 5 ] 0 , R, 1)
with rules as given below to generate L. 1. [ η] 1 [ ] 2 → [ [ η] 1 ] 2 , [ [ b] 1 ] 2 → [ b′ b′ ] 1 [ ] 2 , (endo, exo), [ [ η] 1 ] 2 → [ η] 1 [ ] 2 , (exo),
2. [ η] 1 [ ] 2 → [ [ η ′ ac] 1 ] 2 , (endo),
3. [ ] 3 [ c] 1 → [ [ ] 3 λ] 1 , [ ] 4 [ η ′ ] 1 → [ [ ] 4 η ′ ] 1 , (fendo),
4. [ b[ ] 3 ] 1 → [ ] 3 [ † ] 1 , [ η ′ [ ] 4 ] 1 → [ ] 4 [ η ′′ ] 1 , (fexo),
5. [ η ′′ [ ] 3 ] 1 → [ ] 3 [ η ′′ ] 1 , (fexo),
6. [ η ′′ ] 1 [ ] 4 → [ [ η ′′ ] 1 ] 4 , [ [ b′ ] 1 ] 4 → [ b] 1 [ ] 4 , (endo, exo),
7. [ [ η ′′ ] 1 ] 4 → [ η ′′′ ] 1 [ ] 4 , (exo),
8. [ [ η ′′′ ] 1 ] 2 → [ η ′′′ ] 1 [ ] 2 , (exo),
9. [ η ′′′ ] 1 [ ] 5 → [ [ η ′′′ ] 1 ] 5 , (endo),
10. [ ] 6 [ b′ ] 1 → [ † [ ] 6 ] 1 , [ ] 7 [ η ′′′ ] 1 → [ η[ ] 7 ] 1 , (fendo),
11. [ † [ ] 6 ] 1 → [ ] 6 [ † ] 1 , [ η[ ] 7 ] 1 → [ ] 7 [ η] 1 , (fexo),
12. [ [ η] 1 ] 5 → [ λ] 1 [ ] 5 , (exo),
13. [ η[ ] 5 ] 1 → [ ] 5 [ η] 1 , (fexo),
14. [ † ] 1 [ ] 2 → [ [ † ] 1 ] 2 , (endo), [ [ † ] 1 ] 2 → [ † ] 1 [ ] 2 , (exo).
The system works as follows: Rule 1 is used to replace every b with b′ b′ . Rule 2 can be used at any moment to replace η (guessing that all b’s have been replaced). The rules 3-5 check that every b has been replaced with b′ b′ , and then all the b′ are replaced with b by rules 6-8. Next, rules 9-11 check that all b′ are replaced with b. The computation can halt using rule 12, and can continue using rule 13. An infinite computation is obtained using rule 14 when (i) membrane 1 enters membrane 2 using rule 2 before replacing all b’s, or (ii) membrane 1 enters membrane 5 before replacing all the b′ . It is easy to see that membrane 1 contains strings of L at the end of a halting computation. Theorem 3.14 ([CK2]). P sEM M3 (endo, exo) = P sEM M3 (f endo, f exo). Proof. Consider a P system Π with 3 membranes having endo, exo rules. Let the initial membrane be µ = [ [ w1 ] 1 [ w2 ] 2 ] 3 . It is easy to construct a P system Π′ using only f endo, f exo rules having initial membrane structure µ′ = µ as follows: For every endo rule [ a] i [ ] j → [ [ wa ] i ] j in Π, add the fendo rule [ ] j [ a] i → [ [ ] j wa ] i , and for every exo rule [ [ a] i ] j → [ wa ] i [ ] j , add a fexo rule [ [ ] j a] i → [ ] j [ wa ] i in Π′ . Note that the computation starts in Π using an endo rule only, so in Π′ we can start with the corresponding fendo rule. If the initial membrane structure of Π was µ = [ [ w2 [ w1 ] 1 ] 2 ] 3 , then the first applicable rule is an exo rule. In this case, construct Π′ = Πα with initial membrane structure µ′ = [ [ α] 1 [ w2 ] 2 ] 3 where α is a string dependent on w1 and the set of exo rules applicable
to w1 in Π. After this preprocessing step, the set of fendo, fexo rules is constructed similarly as above. Theorem 3.15 ([CK2]). P sEM M3 (endo, exo, f endo, f exo) ⊆ P sM AT . Proof. Consider a P system Π = (V, H, µ, w1 , w2 , w3 , R, i). Without loss of generality, assume that H = {1, 2, 3}, µ = [ [ ] 2 [ ] 3 ] 1 and that membrane 2 is the output membrane. Since the symbols of membrane 1 play no role in any of the rules, we assume that w1 = ∅. Construct a matrix grammar G = (N, T, S, M ) without appearance checking, where N = V2 ∪ V3 ∪ V̄2 ∪ V̄3 ∪ {f, t, f2in3 , f3in2 , t2in3 , t3in2 , h} and T = V̄2 . The matrices m ∈ M are as follows:
1. (S → f w2 w3 ),
2. (f → f2in3 , a2 → w2 ), if endo rule [ a] 2 [ ] 3 → [ [ w] 2 ] 3 , (f → f3in2 , a2 → w2 ), if fendo rule [ ] 3 [ a] 2 → [ w[ ] 3 ] 2 , (f → f3in2 , a3 → w3 ), if endo rule [ a] 3 [ ] 2 → [ [ w] 3 ] 2 , (f → f2in3 , a3 → w3 ), if fendo rule [ ] 2 [ a] 3 → [ w[ ] 2 ] 3 ,
3. (f2in3 → f, a2 → w2 ), if exo rule [ [ a] 2 ] 3 → [ w] 2 [ ] 3 , (f2in3 → f, a3 → w3 ), if fexo rule [ a[ ] 2 ] 3 → [ ] 2 [ w] 3 , (f3in2 → f, a3 → w3 ), if exo rule [ [ a] 3 ] 2 → [ w] 3 [ ] 2 , (f3in2 → f, a2 → w2 ), if fexo rule [ a[ ] 3 ] 2 → [ ] 3 [ w] 2 ,
4. (f → t), (t → t, a2 → ā2 ), if no endo rules [ a] 2 [ ] 3 → [ [ w] 2 ] 3 and no fendo rules [ ] 3 [ a] 2 → [ w[ ] 3 ] 2 , (t → t, a3 → ā3 ), if no endo rules [ a] 3 [ ] 2 → [ [ w] 3 ] 2 and no fendo rules [ ] 2 [ a] 3 → [ w[ ] 2 ] 3 , (t → h),
5. (f2in3 → t2in3 ), (t2in3 → t2in3 , a3 → ā3 ), if no fexo rule [ [ ] 2 a] 3 → [ ] 2 [ w] 3 , (t2in3 → t2in3 , a2 → ā2 ), if no exo rule [ [ a] 2 ] 3 → [ w] 2 [ ] 3 , (t2in3 → h),
6. (f3in2 → t3in2 ), (t3in2 → t3in2 , a3 → ā3 ), if no exo rule [ [ a] 3 ] 2 → [ w] 3 [ ] 2 , (t3in2 → t3in2 , a2 → ā2 ), if no fexo rule [ a[ ] 3 ] 2 → [ ] 3 [ w] 2 , (t3in2 → h),
7. (h → h, ā3 → λ), (h → λ).
The simulation of Π starts with the first matrix, where the start symbol S is replaced with the string f w2 w3 ; f is a symbol to keep track of the structure of µ at the current point of computation, w2 , w3 are the initial contents of
membranes 2,3. If the first rule to be applied is an endo rule which takes membrane 2 inside membrane 3, then by rule 2, f is replaced with f2in3 , while the symbol a2 which is rewritten by the endo rule is replaced with w2 . Similarly, if the first rule happens to be a fendo rule with membrane 2 engulfed by membrane 3, then f is replaced with f2in3 while a symbol a3 is replaced with w3 according to the fendo rule. Similar to rule 2, rule 3 changes the status symbol f accordingly, and a symbol a2 or a3 is replaced. Rules 2,3 can be used as long as there are applicable rules in membranes 2,3. To terminate the computation, we use rules 4-7. Rule 4 guesses that there are no more applicable rules when membranes 2 and 3 are adjacent. The symbol f is replaced with t; this is followed by replacing symbols of Vi (i = 2, 3) with the corresponding symbols of V̄i if there are no applicable rules in Π. Then t is replaced with h. Similarly, rules 5 and 6 guess that a successful computation has been reached when membrane 2 is inside membrane 3, and when membrane 3 is inside membrane 2, respectively. As in rule 4, symbols of V2 , V3 are replaced with symbols of V̄2 , V̄3 . Rule 7 erases all symbols of V̄3 , and then erases h. Thus, if only symbols of V̄2 remain, then a terminal string is obtained. It is easy to see that a terminal string is obtained iff a halting computation is obtained in Π. The terminal string generated by G is clearly the multiset of the contents of the output membrane at the end of a halting computation. Lemma 3.16 ([AC11]). • P sEM Mn (endo, exo, f endo, f exo) ⊆ P sEM M∗ (endo, exo, f endo, f exo) ⊆ P sRE, for all n ≥ 1. • P sEM Mn (cevol) ⊆ P sEM M∗ (cevol) ⊆ P sRE, for all n ≥ 1. When proving the result of Theorem 3.9, the authors did not use an optimal construction of a membrane system.
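The matrix grammar derivations simulated throughout this section can be sketched concretely. The example below is our own: a derivation step applies all rules of one matrix in sequence, each rewriting one occurrence of its left-hand symbol, and appearance checking is omitted (as in the grammar of Theorem 3.15).

```python
# A sketch of a derivation step in a matrix grammar without appearance
# checking: a matrix is applicable only if each of its rules can be applied
# in order. (Illustrative grammar for {a^n b^n | n >= 1}; not the grammar
# constructed in any of the proofs above.)

def apply_matrix(word, matrix):
    """Apply every rule of the matrix in sequence, or return None."""
    for lhs, rhs in matrix:
        if lhs not in word:
            return None                      # matrix not applicable
        word = word.replace(lhs, rhs, 1)     # rewrite one occurrence
    return word

# Matrices keep X and A in lockstep, mimicking the (X -> Y, A -> x) format.
m1 = [("X", "X"), ("A", "aAb")]              # grow the sentential form
m2 = [("X", ""), ("A", "ab")]                # terminate the derivation

word = "XA"                                  # from the initial matrix (S -> XA)
word = apply_matrix(word, m1)
word = apply_matrix(word, m1)
word = apply_matrix(word, m2)
print(word)                                  # aaabbb
```

The membrane constructions above enforce exactly this lockstep: the X-symbol and the A-symbol must be rewritten by rules of the same matrix, otherwise the trap symbol triggers a non-halting computation.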
In what follows we prove that using the same types of rules (endo, exo, f endo, f exo) we can construct a membrane system using only nine membranes instead of twelve. Whether this construction is optimal remains an open problem. Theorem 3.17 ([AC11]). P sEM M9 (endo, exo, f endo, f exo) = P sRE. Proof. From Lemma 3.16 we have that P sEM M9 (endo, exo, f endo, f exo) ⊆ P sRE, so in what follows we prove that P sRE ⊆ P sEM M9 (endo, exo, f endo, f exo). Consider a matrix grammar G = (N, T, S, M, F ) in the improved strong binary normal form (hence with N = N1 ∪ N2 ∪ {S, †}), having n1 matrices m1 , . . . , mn1 of types (2) and (4) (that is, not used in the appearance checking mode), and n2 matrices of type (3) (with appearance checking rules). The initial matrix is m0 : (S → XA). Let B (1) and B (2) be the two objects in N2 for which we have rules B (j) → † in matrices of M . The matrices of the form (X → Y, B (j) → †) are labelled by m′i , n1 + 1 ≤ i ≤ n1 + n2 with
i ∈ labj , for j ∈ {1, 2}, such that lab1 , lab2 , and lab0 = {1, 2, . . . , n1 } are mutually disjoint sets. We construct a system of nine enhanced mobile membranes Π = (V, H, µ, w1 , . . . , w9 , R, 7), where:
V = N ∪ T ∪ {X′0i , A′0i | X ∈ N1 , A ∈ N2 , 1 ≤ i ≤ n1 } ∪ {Xji , Aji | 0 ≤ i, j ≤ n1 } ∪ {Xi(j) , Xj | X ∈ N1 , j ∈ {1, 2}, n1 + 1 ≤ i ≤ n1 + n2 }
H = {1, . . . , 9}
µ = {(1, 2); (1, 7); (1, 8); (1, 9); (2, 3); (2, 4); (2, 5); (2, 6)}
w7 = XA, where (S → XA) is the initial matrix of G
wh = λ, for all h ∈ {1, . . . , 9}\{7}
The set R of rules is constructed as follows:
(i) For each (nonterminal) matrix mi : (X → Y, A → x), X, Y ∈ N1 , A ∈ N2 , x ∈ (N2 ∪ T )∗ , with 1 ≤ i ≤ n1 , we consider the rules:
1. [X]7 [ ]8 → [[Xi,i ]7 ]8 (endo)
2. [[A]7 ]8 → [Aj,j ]7 [ ]8 (exo)
3. [Xk,i ]7 [ ]9 → [[Xk−1,i ]7 ]9 (endo)
4. [[Ak,j ]7 ]9 → [Ak−1,j ]7 [ ]9 (exo)
5. [ ]8 [X0,i ]7 → [X′0,i [ ]8 ]7 (fendo)
6. [ ]9 [A0,j ]7 → [A′0,j [ ]9 ]7 (fendo)
7. [ ]8 [X0,i ]7 → [†[ ]8 ]7 (fendo)
8. [[A0,j ]7 ]9 → [†]7 [ ]9 (exo)
9. [X′0,i [ ]8 ]7 → [ ]8 [Y ]7 (fexo)
10. [A′0,j [ ]9 ]7 → [ ]9 [x]7 (fexo)
In the initial configuration, we have the objects X, A corresponding to the initial matrix in membrane 7. To simulate a matrix of type (2), we start by applying the endocytosis rule 1, thus replacing X with Xi,i , followed by the exocytosis rule 2, thus replacing a single A ∈ N2 with Aj,j . Rule 3 (for X) and rule 4 (for A) are used to decrement the first index of X and A. This is done to check whether the indices of X and A are the same (i = j), and in this case to rewrite A according to the matrix mi . By using fendo rules 5 and 6, membranes 8 and 9 enter membrane 7, replacing X0,i and A0,j with X′0,i and A′0,j , respectively. This is then followed by rules 9 and 10, when membranes 8 and 9 exit membrane 7 by fexo rules, replacing X′0,i and A′0,j with Y and x, respectively. If i > j, then we obtain A0,j before X0,i .
In this case, we have a configuration where membrane 7 is inside membrane 9 containing A0,j . Then rule 8 is used, replacing A0,j with †, and an infinite computation is obtained (rule 17). If j > i, then we obtain X0,i before A0,j . In this case, we reach a configuration with X0,i Ak,j , k > 0 in membrane 7, and membrane 7 is in the skin membrane. Rule 3 cannot be used now, and the only possibility is to use rule 7, which leads
to an infinite computation (rule 17). Thus, if i = j, then we can correctly simulate a matrix of type (2).
(ii) For each matrix m'_i : (X → Y, B^{(j)} → †), X, Y ∈ N_1, B^{(j)} ∈ N_2, where n_1 + 1 ≤ i ≤ n_1 + n_2, i ∈ lab_j, j = 1, 2, we consider the rules:
11. [X]_7 [ ]_2 → [[X_i^{(j)}]_7]_2, j = 1, 2 (endo)
12. [ ]_{j+2} [X_i^{(j)}]_7 → [X_i^{(j)} [ ]_{j+2}]_7, j = 1, 2 (fendo)
13. [ ]_{j+4} [B^{(j)}]_7 → [† [ ]_{j+4}]_7, j = 1, 2 (fendo)
14. [X_i^{(j)} [ ]_{j+2}]_7 → [ ]_{j+2} [Y_j]_7, j = 1, 2 (fexo)
15. [[Y_j]_7]_2 → [Y]_7 [ ]_2, j = 1, 2 (exo)
The simulation of matrices of type (3) begins with a rule of type 11. Inside membrane 2, rules 12 and 13 are used, so membrane (j + 2) enters membrane 7, and membrane (j + 4) enters membrane 7 if the symbol B^{(j)} is present. In this case, B^{(j)} is replaced with †, leading to an infinite computation (rule 17). Otherwise, membrane (j + 2) comes out of membrane 7, replacing X_i^{(j)} with Y_j. Then membrane 7 exits membrane 2, replacing Y_j with Y, thus successfully simulating a matrix of type (3).
(iii) For a terminal matrix m_i : (X → a, A → x), X ∈ N_1, a ∈ T, A ∈ N_2, x ∈ T*, where 1 ≤ i ≤ n_1:
16. [[a']_7]_1 → [a]_7 [ ]_1 (exo)
17. [ ]_8 [†]_7 → [† [ ]_8]_7 (fendo)
    [† [ ]_8]_7 → [ ]_8 [†]_7 (fexo)
Observe that the simulation of a matrix of type (4) is similar to that of a matrix of type (2), except that we have an a' in place of Y in rule 9. During the final stage of the simulation of a matrix of type (4), we use rule 16 to replace a' with a when sending membrane 7 out of the skin membrane.
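The endo, exo, fendo and fexo moves used in the proofs only restructure the nesting of membranes while rewriting attached objects. The following minimal Python sketch (our own illustrative data structure, not part of the book's formalism) shows the two basic moves on a membrane tree:

```python
from collections import Counter

class Membrane:
    """A membrane: a label, a multiset of objects, and child membranes."""
    def __init__(self, label, objects=""):
        self.label = label
        self.objects = Counter(objects)
        self.children = []

def child(m, label):
    return next(c for c in m.children if c.label == label)

def endo(parent, h, m):
    """Endocytosis: membrane h enters its sibling membrane m."""
    mh = child(parent, h)
    parent.children.remove(mh)
    child(parent, m).children.append(mh)

def exo(parent, m, h):
    """Exocytosis: membrane h exits membrane m, becoming m's sibling."""
    mm = child(parent, m)
    mh = child(mm, h)
    mm.children.remove(mh)
    parent.children.append(mh)

# Rule 1 moves membrane 7 (holding XA) inside membrane 8; a later exo
# move restores the original structure.
skin = Membrane(1)
m7, m8 = Membrane(7, "XA"), Membrane(8)
skin.children = [m7, m8]
endo(skin, 7, 8)   # membrane 7 is now nested inside membrane 8
exo(skin, 8, 7)    # and back beside it
```

The object rewriting attached to each rule (e.g. X becoming X_{i,i}) would update the `objects` multisets at the same time.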
3. Mutual Mobile Membranes
3.1. Biological Motivation. In receptor-mediated endocytosis (Figure 3), a cell takes in a particle of low-density lipoprotein (LDL) from the outside, as presented on http://bcs.whfreeman.com/thelifewire/. To do this, the cell uses receptors which specifically recognize and bind to the LDL particle. The receptors are clustered together. An LDL particle contains one thousand or more cholesterol molecules at its core. A monolayer of phospholipid surrounds the cholesterol, and it is embedded with proteins called apo-B. These apo-B proteins are specifically recognized by receptors in the cell membrane. The receptors in the coated pit bind to the apo-B proteins on the LDL particle. The pit is reinforced by a lattice-like network of proteins called clathrin. Additional clathrin molecules then add to the lattice until the pit eventually pinches off from the membrane.
Figure 3. Receptor-Mediated Endocytosis
Exocytosis is the movement of materials out of a cell via membranous vesicles (Figure 4). These processes allow patches of membrane to flow from compartment to compartment, and require us to think of a cell as a dynamic structure. SNAREs (Soluble NSF (N-ethylmaleimide Sensitive Factor) Attachment Protein Receptors) located on the vesicles (v-SNAREs) and on the target membranes (t-SNAREs) interact to form a stable complex which holds the vesicle very close to the target membrane.
Figure 4. SNARE-Mediated Exocytosis
Starting from these biological examples, we see the necessity of introducing new rules. The rules of systems of enhanced mobile membranes allow a membrane to enter, exit, engulf or push out another membrane. The second membrane simply undergoes the movement: no permission is required from it, and it may not even be aware that a movement involving it has taken place. We propose a modification of the rules from Definition 3.7. We define a new variant of systems of mobile membranes, namely the systems of mutual mobile membranes, originally introduced in [AC10]
as systems of safe enhanced mobile membranes. In systems of mutual mobile membranes, any movement takes place only if both membranes involved agree on the movement. This agreement is described by means of objects a and co-objects ā present in the membranes involved in such a movement. Since taking co-objects is involutive (the co-object of ā is a), mutual endocytosis is the same as mutual enhanced endocytosis, and mutual exocytosis is the same as mutual enhanced exocytosis.
Definition 3.18 ([AC12]). A system of n ≥ 1 mutual mobile membranes is a construct Π = (V, H, µ, w_1, . . . , w_n, R), where:
(1) n, V, H, µ, w_1, . . . , w_n are as in Definition 3.1;
(2) R is a finite set of developmental rules of the following forms:
(a) [uv]_h [ūv']_m → [[w]_h w']_m for h, m ∈ H, u, ū ∈ V+, v, v', w, w' ∈ V*; mutual endocytosis
An elementary membrane labelled h (containing a multiset of objects uv) enters the adjacent membrane labelled m (containing a multiset of objects ūv'). The labels h and m remain unchanged during this process; however, the multisets of objects uv and ūv' are replaced with the multisets of objects w and w' during the operation.
(b) [ūv' [uv]_h]_m → [w]_h [w']_m for h, m ∈ H, u, ū ∈ V+, v, v', w, w' ∈ V*; mutual exocytosis
An elementary membrane labelled h (containing a multiset of objects uv) exits a membrane labelled m (containing a multiset of objects ūv'). The labels of the two membranes remain unchanged, but the multisets of objects uv and ūv' are replaced with the multisets of objects w and w' during the operation.
The rules from Definition 3.18 are applied according to similar principles as for systems of simple mobile membranes. The multiset of objects u in Definition 3.18 indicates the membrane which initiates the move in the rules of types (a) and (b), while the multiset of objects ū denotes the membrane which accepts the movement.
3.2. Computational Power.
The family of all sets Ps(Π) generated inside an output membrane by systems of n mutual mobile membranes using mutual endocytosis rules (mendo) and mutual exocytosis rules (mexo) is denoted by PsMMM_n(mendo, mexo). We denote by PsRE the family of Turing computable sets of vectors generated by arbitrary grammars. In systems of simple mobile membranes with local evolution rules and mobility rules, systems of degree three have the same power as a Turing machine (Theorem 3.6), while in systems of enhanced mobile membranes using only mobility rules the degree of systems having the same power as a Turing machine increases to nine (Theorem 3.17). We notice
that in each mobility rule from systems of simple and enhanced mobile membranes, only one object appears in the left-hand side of the rules used in the proofs. By using multisets instead of single objects, and synchronization by objects and co-objects, we prove that it is enough to consider only systems of three mutual mobile membranes together with the operations of mutual endocytosis and mutual exocytosis to get the full computational power of a Turing machine. The proof is done in a manner similar to the proof for the systems of enhanced mobile membranes [CK1].
Lemma 3.19 ([AC12]). PsMMM_n(mendo, mexo) ⊆ PsEMM_*(mendo, mexo) ⊆ PsRE, for all n ≥ 1.
Theorem 3.20 ([AC12]). PsMMM_3(mendo, mexo) = PsRE.
Proof. From Lemma 3.19 we have PsMMM_3(mendo, mexo) ⊆ PsRE, so in what follows we prove that PsRE ⊆ PsMMM_3(mendo, mexo). It is proved in [41] that each recursively enumerable language can be generated by a matrix grammar in the strong binary normal form. We consider a matrix grammar G = (N, T, S, M, F) in the strong binary normal form, having n_1 matrices m_1, . . . , m_{n_1} of types 2 and 4, and n_2 matrices of type 3 labelled by m'_i, n_1 + 1 ≤ i ≤ n_1 + n_2, with i ∈ lab_j for j ∈ {1, 2}, such that lab_1, lab_2, and lab_0 = {1, 2, . . . , n_1} are mutually disjoint sets. The initial matrix is m_0 : (S → XA). We construct a system of three mutual mobile membranes Π = (V, H, µ, w_1, w_2, w_3, R, 2), where:
V = N ∪ T ∪ {X'_{0,i}, A'_{0,i} | X ∈ N_1, A ∈ N_2, 1 ≤ i ≤ n_1} ∪ {α, ᾱ, β, β̄} ∪ {β_j, β̄_j | j = 1, 2}
    ∪ {X_{j,i}, A_{j,i} | 0 ≤ i, j ≤ n_1}
    ∪ {X_i^{(j)}, Y_j | X ∈ N_1, j ∈ {1, 2}, n_1 + 1 ≤ i ≤ n_1 + n_2}
H = {1, 2, 3}
µ = {(1, 2); (1, 3)}
w_1 = ∅
w_2 = α β β_1 β_2 XA
w_3 = ᾱ β̄ β̄_1 β̄_2
where (S → XA) is the initial matrix, and the set R is constructed as follows:
(i) For each (nonterminal) matrix m_i : (X → Y, A → x), X, Y ∈ N_1, A ∈ N_2, x ∈ (N_2 ∪ T)*, with 1 ≤ i ≤ n_1, we consider the rules:
1. [Xβ]_2 [β̄]_3 → [[X_{i,i}β]_2 β̄]_3 (mendo)
   [[Aβ]_2 β̄]_3 → [A_{j,j}β]_2 [β̄]_3 (mexo)
2. [X_{k,i}β]_2 [β̄]_3 → [[X_{k-1,i}β]_2 β̄]_3, k > 0 (mendo)
   [[A_{k,j}β]_2 β̄]_3 → [A_{k-1,j}β]_2 [β̄]_3, k > 0 (mexo)
3. [X_{0,i}A_{0,j}β]_2 [β̄]_3 → [[X'_{0,i}A'_{0,j}β]_2 β̄]_3 (mendo)
   [[X'_{0,i}A'_{0,j}β]_2 β̄]_3 → [Y x β]_2 [β̄]_3 (mexo)
4. [[X_{k,i}A_{0,j}β]_2 β̄]_3 → [†β]_2 [β̄]_3, k > 0 (mexo)
5. [X_{0,i}A_{k,j}β]_2 [β̄]_3 → [[†β]_2 β̄]_3, k > 0 (mendo)
By rule 1, membrane 2 enters membrane 3, replacing X ∈ N_1 with X_{i,i}. A symbol A ∈ N_2 is replaced with A_{j,j}, and membrane
2 comes out of membrane 3. The subscripts represent the matrices m_i (m_j), 1 ≤ i, j ≤ n_1, for which X and A have a rule. Next, rule 2 is used until X_{i,i} and A_{j,j} become X_{0,i} and A_{0,j}, respectively. If i = j, then we have X_{0,i} and A_{0,j} simultaneously in membrane 2. Then rule 3 is used, by which membrane 2 enters membrane 3 replacing X_{0,i} and A_{0,j} with X'_{0,i} and A'_{0,j}, while X'_{0,i} and A'_{0,j} are replaced with Y and x when membrane 2 exits membrane 3. If i > j, then we obtain A_{0,j} before X_{0,i}. In this case, we have a configuration where membrane 2 is inside membrane 3 and contains A_{0,j}. Rule 2 cannot be used now, and the only possibility is to use rule 4, replacing X_{k,i} and A_{0,j} with †, which leads to an infinite computation (due to rule 12). If j > i, then we obtain X_{0,i} before A_{0,j}. In this case, we reach a configuration with X_{0,i} A_{k,j}, k > 0, in membrane 2, and membrane 2 is in the skin membrane. Rule 2 cannot be used now, and the only possibility is to use rule 5, replacing X_{0,i} and A_{k,j} with †, which leads to an infinite computation (rule 12). In this way, we correctly simulate a type-2 matrix whenever i = j.
(ii) For each matrix m'_i : (X → Y, B^{(j)} → †), X, Y ∈ N_1, B^{(j)} ∈ N_2, where n_1 + 1 ≤ i ≤ n_1 + n_2, i ∈ lab_j, j = 1, 2, we consider the rules:
6. [Xβ_j]_2 [β̄_j]_3 → [[X_i^{(j)} β_j]_2 β̄_j]_3 (mendo)
   [[X_i^{(j)} β_j]_2 β̄_j]_3 → [X_i^{(j)} β_j]_2 [β̄_j]_3 (mexo)
7. [B^{(j)} β_j]_2 [β̄_j]_3 → [[†β_j]_2 β̄_j]_3 (mendo)
8. [X_i^{(j)} β_j]_2 [β̄_j]_3 → [[Y_j β_j]_2 β̄_j]_3 (mendo)
9. [[Y_j β_j]_2 β̄_j]_3 → [Y β_j]_2 [β̄_j]_3 (mexo)
Membrane 2 enters membrane 3 by rule 6, creating an object X_i^{(j)} depending on whether the symbol B^{(j)}, j = 1, 2, is associated with it, and then exits with the newly created object. Next, by rule 7, membrane 2 enters membrane 3 if the object B^{(j)} is present, replacing it with †; in this case an infinite computation follows (rule 12). Regardless of the existence of object B^{(j)}, membrane 2 enters membrane 3 replacing X_i^{(j)} with Y_j. Membrane 2 exits membrane 3, replacing Y_j with Y, successfully simulating a matrix of type 3.
(iii) For a terminal matrix m_i : (X → λ, A → x), X ∈ N_1, A ∈ N_2, x ∈ T*, where 1 ≤ i ≤ n_1, we begin with rules 1–5 as before and simulate the matrix (X → Z, A → x) in place of m_i, where Z is a new symbol.
10. [ᾱ]_3 [Zα]_2 → [λ α [ᾱ]_3]_2 (mendo)
11. [Aα [ᾱ]_3]_2 → [ᾱ]_3 [†α]_2 (mexo)
12. [†β]_2 [β̄]_3 → [[†β]_2 β̄]_3 (mendo)
    [[†β]_2 β̄]_3 → [†β]_2 [β̄]_3 (mexo)
Now we use rule 10 to erase the symbol Z while membrane 3 enters membrane 2. This is followed by rule 11 if there are any more
symbols A ∈ N_2 remaining, in which case they are replaced with the trap symbol †. An infinite computation is obtained if the symbol † is present in membrane 2. It is clear that if the computation proceeds correctly, then membrane 2 contains a multiset of terminal symbols x ∈ T*. In this way we can conclude that Ps(Π) is in PsRE.
It is worth noting that three is the smallest number of membranes allowing an effective use of the movement of membranes given by endocytosis and exocytosis. With respect to the optimality of the systems, only the number of objects and rules could be reduced.
Usually, when studying the computational power of a membrane system, a simulation of a matrix grammar is performed. Another method is to consider the simulation of a register machine. Following this approach, we prove that the family NRE of all sets of natural numbers generated by arbitrary grammars is the same as the family NMMM_3(mendo, mexo) of all sets of natural numbers generated by systems of three mutual mobile membranes using mendo and mexo rules. The result is obtained by looking at the cardinality of the objects in a specified output membrane of the system of mutual mobile membranes at the end of a halting computation.
Theorem 3.21 ([AC12]). NMMM_3(mendo, mexo) = NRE.
Proof. We only prove the assertion NRE ⊆ NMMM_3(mendo, mexo), and infer the other inclusion from the Church-Turing thesis. The proof is based on the observation that each set from NRE is the range of a recursive function. Thus, we prove that for each recursively enumerable function f : N → N there is a system of mutual mobile membranes Π of degree three satisfying the following conditions: for an arbitrary x ∈ N, the system first “generates” a multiset of the form o_1^x and halts if and only if f(x) is defined, and, if so, the result is f(x). In order to prove the assertion, we consider a register machine with three registers, the last one being a special output register which is never decremented.
Let there be a program P consisting of h instructions P_1, . . . , P_h which computes f. Let P_h correspond to the instruction HALT and P_1 be the first instruction. The input value x is expected to be in register 1 and the output value in register 3. Without loss of generality, we can assume that all registers except the first one are empty at the beginning of the computation, and that in the halting configuration all registers except the third one are empty. We construct a system of three mutual mobile membranes Π = (V, H, µ, w_0, w_I, w_op, R, I), where:
V = {s} ∪ {o_r | 1 ≤ r ≤ 3} ∪ {P_k, P'_k | 1 ≤ k ≤ h} ∪ {β, β̄, γ, γ̄} ∪ {β_r | 1 ≤ r ≤ 2}
H = {0, I, op}
µ = {(0, I); (0, op)}
w_I = sβγ
w_0 = ∅
w_op = β̄ γ̄
(i) Generation of the initial contents x of register 1:
1. [sβ]_I [β̄]_op → [[sβ]_I β̄]_op (mendo)
   [[sβ]_I β̄]_op → [s o_1 β]_I [β̄]_op (mexo)
2. [[sβ]_I β̄]_op → [P_1 β]_I [β̄]_op (mexo)
Rule 1 can be used any number of times, generating a number x (o_1^x) as the initial content of register 1. Rule 2 replaces s with the initial instruction P_1, and we are ready for the simulation of the register machine.
(ii) Simulation of an add instruction P_i = (ADD(r), j), 1 ≤ r ≤ 3, 1 ≤ i < h, 1 ≤ j ≤ h:
3. [P_i β]_I [β̄]_op → [[P_i β]_I β̄]_op (mendo)
4. [[P_i β]_I β̄]_op → [P_j o_r β]_I [β̄]_op (mexo)
Membrane I enters membrane op using rule 3, and then exits it by replacing P_i with P_j o_r (rule 4), thus simulating an add instruction.
(iii) Simulation of a subtract instruction P_i = (SUB(r), j, k), 1 ≤ r ≤ 3, 1 ≤ i < h, 1 ≤ j, k ≤ h:
5. [[P_i β]_I β̄]_op → [P'_j β_r β]_I [β̄]_op (mexo)
6. [o_r β_r β]_I [β̄]_op → [[β]_I β̄]_op (mendo); otherwise
   [P'_j β_r β]_I [β̄]_op → [[P'_k β]_I β̄]_op (mendo)
7. [[P'_j β]_I β̄]_op → [P_j β]_I [β̄]_op (mexo)
   [[P'_k β]_I β̄]_op → [P_k β]_I [β̄]_op (mexo)
To simulate a subtract instruction, we start with rule 3, with membrane I entering membrane op. Then rule 5 is used, by which P_i is replaced with P'_j β_r, and membrane I exits membrane op. The newly created object β_r denotes the register which has to be decremented. If there is an o_r in membrane I, then by rule 6 the object o_r is erased together with β_r, and membrane I enters membrane op. This is followed by rule 7, where P'_j is replaced with P_j and membrane I is back inside the skin membrane. If there is no o_r in membrane I, then by applying rule 6, P'_j together with β_r are replaced by P'_k. This is followed by rule 7, where P'_k is replaced with P_k and membrane I is back inside the skin membrane, thus simulating a subtract instruction.
(iv) Halting:
8. [γ̄]_op [P_h γ]_I → [[γ̄]_op γ]_I (mendo)
To halt the computation, the halt instruction P_h must be obtained. Once we obtain P_h in membrane I, membrane op enters membrane I and the computation stops (rule 8). When the system halts, membrane I contains only o_3's, the content of register 3.
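The membrane system above tracks the register machine instruction by instruction. For comparison, the machine itself can be interpreted directly; the doubling program below is a made-up example in the same ADD/SUB/HALT instruction format:

```python
def run(program, registers, pc=1):
    """Interpret a register machine: ("ADD", r, j) increments register r and
    jumps to j; ("SUB", r, j, k) decrements r and jumps to j if r > 0,
    otherwise jumps to k without decrementing; "HALT" stops."""
    while program[pc] != "HALT":
        ins = program[pc]
        if ins[0] == "ADD":
            registers[ins[1]] += 1
            pc = ins[2]
        else:  # SUB
            if registers[ins[1]] > 0:
                registers[ins[1]] -= 1
                pc = ins[2]
            else:
                pc = ins[3]
    return registers

# f(x) = 2x: each unit removed from register 1 adds two units to register 3.
double = {1: ("SUB", 1, 2, 4),
          2: ("ADD", 3, 3),
          3: ("ADD", 3, 1),
          4: "HALT"}
result = run(double, {1: 3, 2: 0, 3: 0})  # register 3 ends with 6
```

In the simulation of Theorem 3.21, each step of this loop corresponds to membrane I entering and exiting membrane op, with the program counter carried as the object P_i.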
This result reveals a different technique for proving the computational power of systems of three mutual mobile membranes.
There are many families of languages included in RE, e.g. PsET0L ⊂ PsRE [99]. Using Theorem 3.20 we have PsET0L ⊂ PsMMM_3(mendo, mexo), but the sets of rules used in the simulation can differ. We exemplify this aspect by an effective construction of a system of three mutual mobile membranes able to simulate an ET0L system in the normal form. In order to get the power of an ET0L system by using the operations of mutual endocytosis and mutual exocytosis, we need only three membranes.
Proposition 3.22 ([AC12]). PsET0L ⊂ PsMMM_3(mendo, mexo).
Proof. In what follows, we use the following normal form: each language L ∈ ET0L can be generated by G = (V, T, ω, R_1, R_2). Moreover, from [98], any derivation starts with several steps of R_1, then R_2 is used exactly once, and the process is iterated; the derivation ends by using R_2. Let G = (V, T, ω, R_1, R_2) be an ET0L system in the normal form. We construct the system of three mutual mobile membranes Π = (V', H, µ, w_0, w_1, w_2, R, 0) as follows:
V' = {†, α, ᾱ, β, β̄} ∪ {β_i, β̄_i | i = 1, 2} ∪ V ∪ V_i, where V_i = {a_i | a ∈ V}, i = 1, 2
H = {0, 1, 2}
µ = {(2, 0); (2, 1)}
w_0 = ω α β_1 β
w_1 = ᾱ β̄ β̄_i, i = 1, 2
Simulation of table R_i, i = 1, 2:
(1) [β_i]_0 [β̄_i]_1 → [[β_i]_0 β̄_i]_1 (mendo)
(2) [[a β_i]_0 β̄_i]_1 → [w_i β_i]_0 [β̄_i]_1, if a → w ∈ R_i (mexo)
(3) [β̄]_1 [a β]_0 → [[β̄]_1 †β]_0 (mendo)
(4) [[a_i β_i]_0 β̄_i]_1 → [a β_i]_0 [β̄_i]_1 (mexo)
(5) [β̄]_1 [a_i β]_0 → [[β̄]_1 †β]_0 (mendo)
(6) [[β_1 α]_0 ᾱ]_1 → [β_i α]_0 [ᾱ]_1 (mexo)
    [[β_2 α]_0 ᾱ]_1 → [β_1 α]_0 [ᾱ]_1 (mexo)
    [[β_2 α]_0 ᾱ]_1 → [α]_0 [ᾱ]_1 (mexo)
(7) [[β̄]_1 †β]_0 → [β̄]_1 [†β]_0 (mexo)
    [β̄]_1 [†β]_0 → [[β̄]_1 †β]_0 (mendo)
In the initial configuration, the string β_1 ω is in membrane 0, where ω is the axiom and β_1 indicates that table 1 should be simulated first. The simulation begins with rule 1: membrane 0 enters membrane 1. In membrane 1, the only applicable rule is 2, by which the symbols a ∈ V are replaced by w_1 corresponding to the rule a → w ∈ R_1. Rules 1 and 2 can be repeated until all the symbols a ∈ V are replaced according to a rule in R_1, thus obtaining only objects from the alphabet V_1. In order to keep track of which table R_i of rules is simulated, each rule of the form a → w ∈ R_i is rewritten as a → w_i. If any symbol a ∈ V is still present in membrane 0, i.e., if some symbol a ∈ V has been left out of the simulation, membrane 1 enters membrane 0, replacing it with the trap symbol † (rule 3), and this triggers a never
ending computation (rule 7). Otherwise, rules 1 and 4 are applied as long as required, until all the symbols of V_1 are replaced with the corresponding symbols of V. Next, if any symbol a_1 ∈ V_1 has not been replaced, membrane 1 enters membrane 0, replacing it with the trap symbol † (rule 5), and this triggers a never ending computation (rule 7). Otherwise, we have three possible evolutions (rule 6): (i) if β_1 is in membrane 0, then it is replaced by β_i, and the computation continues with the simulation of table i; (ii) if β_2 is in membrane 0, then it is replaced by β_1, and the computation continues with the simulation of table 1; (iii) if β_2 is in membrane 0, then it is deleted, and the computation stops. It is clear that Ps(Π) contains only the vectors which correspond to the Parikh image of strings in the language generated by G.
Corollary 3.23. PsE0L ⊂ PsMMM_3(mendo, mexo).
Lemma 3.24. LEM_3(mendo, mexo) − ET0L ≠ ∅.
Proof. L = {x ∈ {a, b}* | |x|_b = 2|x|_a} ∉ ET0L [98]. We construct
Π = ({a, b, b', †, β, β̄, β_1, β̄_1, β_2, β̄_2, β_3, β̄_3}, {0, 1, 2}, [[b β]_1 [β̄]_2]_0, R, 1)
with rules as given below to generate L.
(1) [β]_1 [β̄]_2 → [[β]_1 β̄]_2 (mendo)
    [[b β]_1 β̄]_2 → [b' b' β]_1 [β̄]_2 (mexo)
    [[β]_1 β̄]_2 → [β]_1 [β̄]_2 (mexo)
(2) [β]_1 [β̄]_2 → [[a β_1]_1 β̄_1]_2 (mendo)
(3) [[b β_1]_1 β̄_1]_2 → [† β_1]_1 [β̄_1]_2 (mexo)
    [† β_1]_1 [β̄_1]_2 → [[† β_1]_1 β̄_1]_2 (mendo)
    [[† β_1]_1 β̄_1]_2 → [† β_1]_1 [β̄_1]_2 (mexo)
(4) [[β_1]_1 β̄_1]_2 → [β_2]_1 [β̄_2]_2 (mexo)
    [b' β_2]_1 [β̄_2]_2 → [[b β_2]_1 β̄_2]_2 (mendo)
    [[β_2]_1 β̄_2]_2 → [β_2]_1 [β̄_2]_2 (mexo)
(5) [β_2]_1 [β̄_2]_2 → [[β_3]_1 β̄_3]_2 (mendo)
(6) [[b' β_3]_1 β̄_3]_2 → [† β_3]_1 [β̄_3]_2 (mexo)
    [† β_3]_1 [β̄_3]_2 → [[† β_3]_1 β̄_3]_2 (mendo)
    [[† β_3]_1 β̄_3]_2 → [† β_3]_1 [β̄_3]_2 (mexo)
(7) [[β_3]_1 β̄_3]_2 → [ ]_1 [ ]_2 (mexo)
(8) [[β_3]_1 β̄_3]_2 → [β]_1 [β̄]_2 (mexo)
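The rules above implement a generate-and-check discipline: the b → b'b' rewriting introduces the b's, and the trap symbol † rejects incomplete rewritings. Membership in the witness language L itself is just a counting condition, which can be checked directly (illustrative check, not part of the proof):

```python
def in_L(x):
    """x belongs to L iff x is over {a, b} and has twice as many b's as a's."""
    return set(x) <= {"a", "b"} and x.count("b") == 2 * x.count("a")
```

For example, "bbabba" is in L (four b's, two a's), while "ab" is not.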
The system works as follows. Rule 1 is used to replace every b with b'b'. Rule 2 can be used at any moment to replace β and β̄ with β_1 and β̄_1 (guessing that all b's have been replaced) and also to create an object a. Rule 3 checks that every b has been replaced with b'b'; if not, an infinite computation is obtained. If there is no b left, then rule 4 replaces β_1 and β̄_1 with β_2 and β̄_2, and is then used to replace every b' with b. Rule 5 can be used at any moment to replace β_2 and β̄_2 with β_3 and β̄_3 (guessing that all b''s have been replaced). Rule 6 checks that every b' has been replaced
with b; if not, an infinite computation is obtained. The computation can halt using rule 7, and can continue using rule 8. It is easy to see that membrane 1 contains strings of L at the end of a halting computation.
Theorem 3.25. PsEM_3(mendo, mexo) ⊆ PsMAT.
Proof. Consider a P system Π = (V, H, µ, w_1, w_2, w_3, R, i). Without loss of generality, assume that H = {1, 2, 3}, µ = [[ ]_2 [ ]_3]_1, and that membrane 2 is the output membrane. Since the symbols of membrane 1 play no role in any of the rules, we assume that w_1 = ∅. Construct a matrix grammar G = (N, T, S, M) without appearance checking, where N = V_2 ∪ V_3 ∪ {f, t, f_2in3, f_3in2, t_2in3, t_3in2, h} and T = V̂_2, with V_i = {a_i | a ∈ V} for i = 2, 3 and V̂_2 = {â_2 | a ∈ V}. If we consider rules of the form [a v]_2 [ā v']_3 → [[w]_2 w']_3 (mendo) and [[a v]_2 ā v']_3 → [w]_2 [w']_3 (mexo), with v = v_(1) . . . v_(n), v_(i) ∈ V, and v' = v'_(1) . . . v'_(m), v'_(j) ∈ V, the matrices m ∈ M are as follows:
(1) (S → f w_2 w_3),
(2) (f → f_2in3, a_2 → w_2, v_(1)2 → λ, . . . , v_(n)2 → λ, ā_3 → w'_3, v'_(1)3 → λ, . . . , v'_(m)3 → λ), if there is a mendo rule [a v]_2 [ā v']_3 → [[w]_2 w']_3;
    (f → f_3in2, a_3 → w_3, v_(1)3 → λ, . . . , v_(n)3 → λ, ā_2 → w'_2, v'_(1)2 → λ, . . . , v'_(m)2 → λ), if there is a mendo rule [a v]_3 [ā v']_2 → [[w]_3 w']_2;
(3) (f_2in3 → f, a_2 → w_2, v_(1)2 → λ, . . . , v_(n)2 → λ, ā_3 → w'_3, v'_(1)3 → λ, . . . , v'_(m)3 → λ), if there is a mexo rule [[a v]_2 ā v']_3 → [w]_2 [w']_3;
    (f_3in2 → f, a_3 → w_3, v_(1)3 → λ, . . . , v_(n)3 → λ, ā_2 → w'_2, v'_(1)2 → λ, . . . , v'_(m)2 → λ), if there is a mexo rule [[a v]_3 ā v']_2 → [w]_3 [w']_2;
(4) (f → t), (t → t, a_2 → â_2), (t → t, a_3 → λ), (t → h);
(5) (f_2in3 → t_2in3), (t_2in3 → t_2in3, a_2 → â_2), (t_2in3 → t_2in3, a_3 → λ), (t_2in3 → h);
(6) (f_3in2 → t_3in2), (t_3in2 → t_3in2, a_2 → â_2), (t_3in2 → t_3in2, a_3 → λ), (t_3in2 → h);
(7) (h → λ).
The simulation of Π starts with the first matrix, where the start symbol S is replaced with the string f w_2 w_3; f is a symbol keeping track of the structure of µ at the current point of the computation, and w_2, w_3 are the initial contents of membranes 2 and 3. If the first rule to be applied is a mendo rule which takes membrane 2 inside membrane 3, then by rule 2, f is replaced with f_2in3, while the symbol a_2 is replaced with w_2, the symbols of v (indexed by 2) are replaced by λ, ā_3 is replaced by w'_3, and the symbols of v' (indexed by 3) are replaced by λ. Similarly, if the first rule happens to be a mendo rule which takes membrane 3 into membrane 2, then f is replaced with f_3in2, while the symbol a_3 is replaced with w_3, the symbols of v (indexed by 3) are replaced by λ, ā_2 is replaced by w'_2, and the
symbols of v' (indexed by 2) are replaced by λ. Rules 2 and 3 can be used as long as there are applicable rules in membranes 2 and 3. To terminate the computation, we use rules 4–7. Rule 4 guesses that there are no more applicable rules when membranes 2 and 3 are adjacent: the symbol f is replaced with t; this is followed by replacing the symbols of V_2 with the corresponding symbols of V̂_2 and the symbols of V_3 with λ, if there are no applicable rules in Π; t is then replaced with h. Similarly, rules 5 and 6 guess that a successful computation has been reached when membrane 2 is inside membrane 3, and when membrane 3 is inside membrane 2, respectively. As in rule 4, the symbols of V_2 are replaced with symbols of V̂_2 and the symbols of V_3 with λ. Rule 7 erases h. Thus, if only symbols of V̂_2 remain, then a terminal string is obtained. It is easy to see that a terminal string is obtained iff a halting computation is obtained in Π. The terminal string generated by G is clearly the multiset of the contents of the output membrane at the end of a halting computation.
4. Mutual Membranes with Objects on Surface
Definition 3.26 ([AC8]). A system of n mutual membranes with objects on surface (M2OS) is a construct Π = (V, µ, u_1, . . . , u_n, R), where
(1) V is a finite, non-empty alphabet of proteins;
(2) µ ⊆ H × H describes the membrane structure, such that (i, j) ∈ µ denotes that the membrane identified by j is contained in the membrane identified by i; we distinguish the external membrane (usually called the “skin” membrane) and several internal membranes; a membrane without any other membrane inside it is said to be elementary; the membrane structure has n ≥ 2 membranes;
(3) u_1, . . . , u_n are multisets of proteins (represented by strings over V) bound to the n membranes of µ at the beginning of the computation (one assumes that the membranes in µ have a precise identification, e.g., by means of labels or other “names”, in order to have the marking by means of u_1, . . . , u_n precisely defined; the labels play no other role than specifying this initial marking of the membranes); the skin membrane is labelled with 1 and u_1 = λ;
(4) R is a finite set of rules of the following forms:
(a) [ ]_{vau} →_m [[ ]_{ux}]_{vy}, where a ∈ V, u, v, x, y ∈ V*, ux, vy ∈ V+ (pino)
The object a creates an empty membrane within the membrane where the object a is attached, and may be consumed during the process. The multiset u on the membrane so created is transferred from the initial membrane; x and y are newly created multisets of objects.
(b) [ ]_{au} [ ]_{ābv} →_m [[[ ]_{ux}]_b]_{vy}, where a, ā, b ∈ V, u, v, x, y ∈ V*, ux, vy ∈ V+ (phago)
An object a which comes together with its complementary object ā models a membrane (the one with ā) “eating” another membrane (the one with a). It proceeds by the membrane containing v wrapping around the membrane containing u and joining itself on the other side. Hence, an additional layer of membrane is created around the eaten membrane; the object on that membrane is b. The objects a and ā may be consumed during the evolution; x and y are newly created multisets of objects.
(c) [[ ]_{au}]_{āv} →_m [ ]_{uvx}, where a, ā ∈ V, u, v, x ∈ V*, uvx ∈ V+ (exo)
An object a and a complementary object ā model the merging of two nested membranes, which starts with the membranes touching at a point. In this process (which is a smooth, continuous one), the content of the membrane containing the multiset au gets expelled to the outside, and all the objects of the two membranes are united into a multiset on the membrane which initially contained v. The objects a and ā may be consumed during this evolution, and x is a newly created multiset of objects.
Following the line described by us in [AC5], where a structural congruence was defined for membrane systems, we describe in Table 1 the syntax and in Table 2 the structural congruence of systems of mutual membranes with objects on surface.
Table 1. Syntax of M2OS [AC8]
Systems    M, N ::= M N | [ ]_u | [M]_u     (membranes with objects on surface)
Multisets  u, v ::= λ | a | ā | uv          (multisets of objects, where a, ā ∈ V)
We denote by M the set of membrane systems defined in Table 1. We abbreviate λu as u and [ ]_λ as [ ].
Table 2. Structural Congruence of M2OS [AC8]
M N ≡m N M
M (N P) ≡m (M N) P
uv ≡m vu
u(vw) ≡m (uv)w
λu ≡m u
M ≡m N implies M P ≡m N P
u ≡m v implies uw ≡m vw
M ≡m N and u ≡m v implies [M]_u ≡m [N]_v
The structural congruence relation is a way of rearranging the system such that the interacting parts can come closer. The rules of Table 3 are added to the rules which appear in Definition 3.26 in order to show how we can construct a multiset of rules applied in a step of evolution.
Table 3. Reductions of M2OS [AC8]
Par:    P →m Q implies P R →m Q R
Mem:    P →m Q implies [P]_u →m [Q]_u
Struct: P ≡m P' and P' →m Q' and Q' ≡m Q implies P →m Q
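To make the surface-multiset bookkeeping of the exo rule concrete, here is a small Python sketch; the membrane representation, the object names and the explicit complement argument are our own illustrative choices, not part of the calculus:

```python
from collections import Counter

class Mem:
    """A membrane: a multiset of objects on its surface, plus child membranes."""
    def __init__(self, surface, children=None):
        self.surface = Counter(surface)
        self.children = children if children is not None else []

def exo(parent, inner, a, a_bar, x=()):
    """Apply [[ ]a.u ]abar.v ->m [ ]u.v.x: the elementary membrane `inner`
    (carrying object a) merges into `parent` (carrying the complement a_bar);
    both trigger objects are consumed and the multiset x is created."""
    assert inner in parent.children and not inner.children
    assert parent.surface[a_bar] > 0 and inner.surface[a] > 0
    parent.surface[a_bar] -= 1
    inner.surface[a] -= 1
    # Counter addition drops the objects whose count reached zero.
    parent.surface = parent.surface + inner.surface + Counter(x)
    parent.children.remove(inner)

outer = Mem(["a_bar", "v"], [Mem(["a", "u"])])
exo(outer, outer.children[0], "a", "a_bar", x=["x"])
# outer now carries the multiset u v x and has no children
```

Pino and phago can be sketched in the same style, creating or wrapping membranes instead of merging them.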
The operational semantics defined above for systems of mutual membranes with objects on surface is useful in establishing a connection with brane calculi, namely in defining an encoding of a fragment of brane calculus into systems of mutual membranes with objects on surface.
4.1. Computational Power. In what follows we explore the computational power of systems of mutual membranes with objects on surface in the pino/exo and phago/exo cases. The power of these cases was already investigated in [65]; we give an improvement related to the number of membranes used. A summary of the results (existing as well as new ones) is given in Table 4.
Table 4. Summary of Results
Operations  | No. membranes | Weight | RE  | Reference
Pino, exo   | 8             | 4, 3   | Yes | Theorem 6.1 [65]
Pino, exo   | 3             | 5, 4   | Yes | Theorem 3.28 [AC8]
Phago, exo  | 9             | 5, 2   | Yes | Theorem 6.2 [65]
Phago, exo  | 9             | 4, 3   | Yes | Theorem 6.2 [65]
Phago, exo  | 4             | 6, 3   | Yes | Theorem 3.29 [AC8]
The multiplicity vector of the multiset from all membranes is considered as the result of the computation. Thus, the result of a halting computation consists of all the vectors describing the multiplicity of objects from all the membranes; a non-halting computation provides no output. The set of vectors of natural numbers produced in this way by a system Π is denoted by Ps(Π). A computation can produce several vectors, all of them included in the set Ps(Π). The number of objects in the right-hand side of a rule is called its weight. The family of all sets Ps(Π) generated by systems of mutual membranes with objects on surface using, at any moment during a halting computation, at most n membranes, and any of the rules r_1, r_2 ∈ {pino, exo, phago} of weight at most r and s respectively, is denoted by PsM2OS_n(r_1(r), r_2(s)). When one of the parameters is not bounded, we replace it with ∗. We denote by PsRE the family of Turing computable sets of vectors generated by arbitrary grammars.
Lemma 3.27 ([AC8]).
(i) PsM2OS_m(r_1(i), r_2(j)) ⊆ PsM2OS_{m'}(r_1(i'), r_2(j')), for all m ≤ m', i ≤ i', j ≤ j'.
(ii) PsM2OS_∗(r_1(∗), r_2(∗)) ⊆ PsRE.
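Reading off a result as a multiplicity (Parikh) vector, as described above, amounts to counting objects across membranes in a fixed alphabet order; a one-function illustration:

```python
from collections import Counter

def parikh(membrane_multisets, alphabet):
    """Multiplicity vector of all objects found in the given membranes."""
    total = sum((Counter(m) for m in membrane_multisets), Counter())
    return tuple(total[a] for a in alphabet)

# two membranes holding the multisets aab and b: two a's and two b's in total
vector = parikh(["aab", "b"], "ab")  # (2, 2)
```

Ps(Π) collects one such vector for every halting computation of Π.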
In [65] it is proven that systems of eight membranes with objects on surface using pino and exo operations of weight four and three are universal. In what follows, we show that we can reduce the number of membranes from eight to three, but in order to do this we need to increase the weights of the pino and exo operations by one, namely from four and three to five and four. This means that when constructing a universal system of membranes with objects on surface using pino and exo operations, we need to choose between minimizing the number of membranes and minimizing the weights of the operations.
Theorem 3.28 ([AC8]). PsRE = PsM2OS_m(pino(r), exo(s)), for all m ≥ 3, r ≥ 5, s ≥ 4.
Proof. Taking into account Lemma 3.27 (ii), we have to prove only the inclusion PsRE ⊆ PsM2OS_3(pino(5), exo(4)). It is proved in [41] that each recursively enumerable language can be generated by a matrix grammar in the strong binary normal form. We consider a matrix grammar G = (N, T, S, M, F) in the strong binary normal form, having n_1 matrices m_1, . . . , m_{n_1} of types 2 and 4, and n_2 matrices of type 3 labelled by m'_i, n_1 + 1 ≤ i ≤ n_1 + n_2, with i ∈ lab_j for j ∈ {1, 2}, such that lab_1, lab_2, and lab_0 = {1, 2, . . . , n_1} are mutually disjoint sets. The initial matrix is m_0 : (S → XA). We construct a system of three mutual membranes with objects on surface Π = (V, µ, u_1, u_2, u_3, R), where:
µ = {(1, 2); (2, 3)}, u_1 = λ, u_2 = β̄X, u_3 = βA
V = {β, β̄, f, †} ∪ {α, α' | α ∈ N_2 ∪ T} ∪ {X, X_l, X'_l, X''_l, X_l^{(j)}, X_l^{(j)'} | X ∈ N_1, 1 ≤ l ≤ n_1 + n_2, 1 ≤ j ≤ 2}
The set of rules R is constructed as follows:
(1) For each (nonterminal) matrix m_l : (X → Y, A → x), X, Y ∈ N_1, A ∈ N_2, x ∈ (N_2 ∪ T)*, with 1 ≤ l ≤ n_1, we consider the rules:
1. [[ ]_{βA}]_{β̄X} → [ ]_{β β̄ X_l A} (exo)
   [[ ]_β]_{β̄XA} → [ ]_{β β̄ X_l A} (exo)
2. [ ]_{β β̄ X_l A} → [[ ]_{β x'}]_{β̄ X_l}, if x ≠ λ (pino)
   (If m_l : (X → Y, A → α_1 α_2), then x' = α'_1 α_2 or x' = α_1 α'_2; if m_l : (X → Y, A → α_1), then x' = α'_1.)
   [ ]_{β β̄ X_l A} → [[ ]_{β f}]_{β̄ X_l}, if x = λ (pino)
3. [[ ]_{β α'}]_{β̄ X_l} → [ ]_{β β̄ α' X'_l} (exo)
   [[ ]_{β f}]_{β̄ X_l} → [ ]_{β β̄ f X'_l} (exo)
4. [ ]_{β β̄ α' X'_l} → [[ ]_{β α}]_{β̄ X'_l} (pino)
   [ ]_{β β̄ f X'_l} → [[ ]_β]_{β̄ X'_l} (pino)
5. [[ ]_β]_{β̄ X'_l} → [ ]_{β β̄ X''_l} (exo)
6. [ ]_{β β̄ X''_l} → [[ ]_β]_{β̄ Y} (pino)
4. MUTUAL MEMBRANES WITH OBJECTS ON SURFACE
7. [[ ]β ]βX → [ ]ββ† (exo)
8. [ ]ββ† → [[ ]β ]β† (pino)
9. [[ ]β ]β† → [ ]ββ† (exo)
We start the simulation with rule 1. Initially, the symbol A ∈ N2 corresponding to the initial matrix is part of the inner membrane. In later steps however, since we use pino rules which allow an arbitrary distribution, the corresponding A may be found along with X as well. In any case we replace X by Xl, marking the beginning of the simulation. This is followed by rule 2, where Xl is used to replace A by either x′ (in case x ≠ λ) or f (in case x = λ). Next, we apply rule 3 to replace Xl by Xl′, in order to prevent replacing any more A’s. In rule 4 we use Xl′ to replace α′ by α, while rule 5 replaces Xl′ by Xl′′. Rule 6 replaces Xl′′ by Y, thus successfully simulating a type-2 matrix and returning to the initial membrane structure. If the corresponding symbol A ∈ N2 is not present (we cannot apply rule 1), rule 7 introduces a trap symbol which leads to an infinite computation (rules 8 and 9).
(2) For each matrix m′l : (X → Y, B(j) → †), X, Y ∈ N1, B(j) ∈ N2, where n1 + 1 ≤ l ≤ n1 + n2, l ∈ labj, j = 1, 2, we consider the rules:
10. [[ ]β ]βX → [ ]ββXl(j) (exo)
11. [ ]ββXl(j) → [[ ]β ]βXl(j) (pino)
12. [[ ]βB(j) ]βXl(j) → [ ]ββXl(j)† (exo)
    [[ ]β ]βB(j)Xl(j) → [ ]ββXl(j)† (exo)
13. [[ ]β ]βXl(j) → [ ]ββXl(j)′ (exo)
14. [ ]ββXl(j)′ → [[ ]β ]βY (pino)
Rule 10 starts the simulation of a type-3 matrix by replacing X with Xl(j), thereby remembering the index l of the matrix and the index j of the possibly present symbol B(j). This is followed by rule 11. At this step we need to check whether the corresponding symbol B(j) ∈ N2 is present. The symbol B(j), if present, is part either of the membrane containing β or of the membrane containing β̄. If B(j) is present, rule 12 replaces it with † and, by applying rule 11, we return to the configuration before replacing B(j). Regardless of the presence of B(j), rule 13 is applied, replacing Xl(j) by Xl(j)′. Rule 14 replaces Xl(j)′ by Y, thus successfully simulating a type-3 matrix and returning to the initial membrane structure.
(3) For a terminal matrix ml : (X → a, A → x), X ∈ N1, a ∈ T, A ∈ N2, x ∈ T∗, where 1 ≤ l ≤ n1, we consider the rules:
15. [ ]ββXl′′ → [[ ]β ]βa (pino)
16. [[ ]β ]βA → [ ]ββ† (exo)
    [[ ]βA ]β → [ ]ββ† (exo)
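The control-symbol cycle imposed by rules 1–6 (X → Xl → Xl′ → Xl′′ → Y on the outer membrane, A rewritten through a primed intermediate on the inner one) can be sketched as straight-line code over surface multisets. This is an illustrative Python sketch with hypothetical names, not part of the construction; the bookkeeping symbols β and β̄ are omitted for readability.

```python
from collections import Counter

# Illustrative sketch: the control-symbol cycle X -> X_l -> X_l' -> X_l'' -> Y
# that rules 1-6 impose on the outer membrane, while the inner membrane
# rewrites A into a1' a2 and then a1 a2 (one nondeterministic choice of x').

def simulate_type2(outer, inner, l, X, Y, A, a1, a2):
    """Simulate matrix m_l : (X -> Y, A -> a1 a2) on two surface multisets."""
    if outer[X] == 0 or inner[A] == 0:
        return None  # rule 1 not applicable; rule 7 would trap instead
    outer[X] -= 1; outer[f"{X}_{l}"] += 1                 # rule 1 (exo)
    inner[A] -= 1; inner[f"{a1}'"] += 1; inner[a2] += 1   # rule 2 (pino)
    outer[f"{X}_{l}"] -= 1; outer[f"{X}_{l}'"] += 1       # rule 3 (exo)
    inner[f"{a1}'"] -= 1; inner[a1] += 1                  # rule 4 (pino)
    outer[f"{X}_{l}'"] -= 1; outer[f"{X}_{l}''"] += 1     # rule 5 (exo)
    outer[f"{X}_{l}''"] -= 1; outer[Y] += 1               # rule 6 (pino)
    return outer, inner

outer, inner = Counter({"X": 1}), Counter({"A": 1})
simulate_type2(outer, inner, 1, "X", "Y", "A", "B", "C")
print(outer["Y"], inner["B"], inner["C"])  # 1 1 1
```

After the call, X has been consumed, Y appears on the outer membrane, and A has been rewritten into BC on the inner one, mirroring one correct simulation of a type-2 matrix.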
Starting with rules 1 to 5 and continuing with rule 15, we successfully simulate the terminal matrix. Rule 16, together with rule 8, should be used if any A ∈ N2 still exists after the simulation of a terminal matrix, leading to an infinite computation. It is enough to check the applicability of rule 16 after applying all possible rules. The result of a correct simulation is the set of all symbols present on all membranes, without the symbols β and β̄.
In [65] it is proven that systems of nine membranes with objects on surface, using phago and exo operations of weight four and three (or five and two), are universal. In what follows we show that the number of membranes can be reduced from nine to four, but to do this we need to increase the weights of the phago and exo operations, namely from four and three (or five and two) to six and three. When constructing a universal system of membranes with objects on surface using phago and exo operations, we face the same choice as when using pino and exo operations: minimizing either the number of membranes or the weights of the operations.
Theorem 3.29 ([AC8]). PsRE = PsM²OSm(phago(r), exo(s)), for all m ≥ 4, r ≥ 6, s ≥ 3.
Proof. Taking into account Lemma 3.27 (ii), we only have to prove the inclusion PsRE ⊆ PsM²OS4(phago(6), exo(3)). It is proved in [41] that each recursively enumerable language can be generated by a matrix grammar in the strong binary normal form. We consider a matrix grammar G = (N, T, S, M, F) in the strong binary normal form, having n1 matrices m1, . . . , mn1 of types 2 and 4, and n2 matrices of type 3 labelled by m′i, n1 + 1 ≤ i ≤ n1 + n2, with i ∈ labj for j ∈ {1, 2}, such that lab1, lab2 and lab0 = {1, 2, . . . , n1} are mutually disjoint sets. The initial matrix is m0 : (S → XA).
We construct a system of three mutual membranes with objects on surface Π = (V, µ, u1, u2, u3, R), where:
µ = {(1, 2); (2, 3)}, u1 = λ, u2 = ββX, u3 = βA
V = {β, β̄} ∪ {α, α′ | α ∈ N2 ∪ T} ∪ {X, Xl, Xl′, Xl′′, Xl(j), Xl(j)′ | X ∈ N1, 1 ≤ l ≤ n1 + n2, 1 ≤ j ≤ 2}
The set of rules R is constructed as follows:
(1) For each (nonterminal) matrix ml : (X → Y, A → x), X, Y ∈ N1, A ∈ N2, x ∈ (N2 ∪ T)∗, with 1 ≤ l ≤ n1, we consider the rules:
1. [ ]βA [ ]ββX → [[[ ]βA ]β ]βXl (phago)
2. [[ ]β ]βXl → [ ]ββXl (exo)
3. [ ]βA [ ]ββXl → [[[ ]βx′ ]β ]βXl′ (phago)
   (If ml : (X → Y, A → α1α2) then x′ = α1′α2 or x′ = α1α2′,
   and if ml : (X → Y, A → α1) then x′ = α1′)
   [ ]βA [ ]ββXl → [[[ ]βf ]β ]βXl′ (phago)
4. [[ ]β ]βXl′ → [ ]ββXl′ (exo)
5. [ ]βα′ [ ]ββXl′ → [[[ ]βα ]β ]βXl′′ (phago)
   [ ]βf [ ]ββXl′ → [[[ ]β ]β ]βXl′′ (phago)
6. [[ ]β ]βXl′′ → [ ]ββY (exo)
7. [ ]β [ ]ββX → [[[ ]β ]β ]β† (phago)
8. [[ ]β ]β† → [ ]ββ† (exo)
9. [ ]β [ ]ββ† → [[[ ]β ]β ]β† (phago)
We start the simulation with rule 1, replacing X by Xl and thus marking the beginning of the simulation. This is followed by rule 2. In rule 3, Xl is used to replace A by either x′ (in case x ≠ λ) or f (in case x = λ), while Xl is replaced by Xl′ in order to prevent replacing any more A’s. This is followed by rule 4. In rule 5 we use Xl′ to replace α′ by α, while Xl′ is replaced by Xl′′. Rule 6 replaces Xl′′ by Y, thus successfully simulating a type-2 matrix and returning to the initial membrane structure. In case the corresponding symbol A ∈ N2 is not present (we cannot apply rule 1), rule 7 introduces a trap symbol which leads to an infinite computation (rules 8 and 9).
(2) For each matrix m′l : (X → Y, B(j) → †), X, Y ∈ N1, B(j) ∈ N2, where n1 + 1 ≤ l ≤ n1 + n2, l ∈ labj, j = 1, 2, we consider the rules:
10. [ ]β [ ]ββX → [[[ ]β ]β ]βXl(j) (phago)
11. [[ ]β ]βXl(j) → [ ]ββXl(j) (exo)
12. [ ]βB(j) [ ]ββXl(j) → [[[ ]β ]β ]βXl(j)† (phago)
13. [ ]β [ ]ββXl(j) → [[[ ]β ]β ]βXl(j)′ (phago)
14. [[ ]β ]βXl(j)′ → [ ]ββY (exo)
Rule 10 starts the simulation of a type-3 matrix by replacing X with Xl(j), thereby remembering the index l of the matrix and the index j of the possibly present symbol B(j). This is followed by rule 11. At this step we need to check whether the corresponding symbol B(j) ∈ N2 is present. If B(j) is present, rule 12 replaces it with † and, by applying rule 11, we return to the configuration before replacing B(j). Regardless of the presence of B(j), rule 13 is applied, replacing Xl(j) by Xl(j)′. Rule 14 replaces Xl(j)′ by Y, thus successfully simulating a type-3 matrix and returning to the initial membrane structure.
(3) For a terminal matrix ml : (X → a, A → x), X ∈ N1, a ∈ T, A ∈ N2, x ∈ T∗, where 1 ≤ l ≤ n1, we consider the rules:
15. [[ ]β ]βXl′′ → [ ]ββa (exo)
16. [ ]βA [ ]ββ → [[[ ]β ]β ]β† (phago)
Starting with rules 1 to 5 and continuing with rule 15, we successfully simulate the terminal matrix. Rule 16, together with rule 8, should be used if any A ∈ N2 still exists after the simulation of a terminal matrix, leading to an infinite computation. It is enough to check the applicability of rule 16 after applying all possible rules. The result of a correct simulation is the set of all symbols present on all membranes, without the symbols β and β̄.
4.2. Relationship of PEP Calculus and Mutual Membranes with Objects on Surface. Some work has been done on relating membrane systems and brane calculi [13, 15, 26, 65, 66]. Inspired by brane calculi, a model of membrane systems having objects attached to the membranes has been introduced in [19]. In [12], a class of membrane systems containing both free-floating objects and objects attached to membranes has been proposed. We continue this research line, and simulate a fragment of brane calculus by using systems of mutual membranes with objects on surface. “At the first sight, the role of objects placed on membranes is different in membrane and brane systems: in membrane computing, the focus is on the evolution of objects themselves, while in brane calculi the objects (“proteins”) mainly control the evolution of membranes” [91]. By defining an encoding of the PEP fragment of brane calculus into systems of mutual membranes with objects on surface, we show that the difference between the two models is not significant. Another difference, regarding the semantics of the two formalisms, is expressed in [13]: “whereas brane calculi are usually equipped with an interleaving, sequential semantics (each computational step consists of the execution of a single instruction), the usual semantics in membrane computing is based on maximal parallelism (a computational step is composed of a maximal set of independent interactions).”
4.2.1. Brane Calculi.
Brane calculus [17] deals with membranes representing the sites of activity; thus a computation happens on the membrane surface. The operations of the two basic brane calculi, pino, exo, phago (PEP) and mate, drip, bud (MBD), are directly inspired by biological processes such as endocytosis, exocytosis and mitosis. The latter operations can be simulated using the former ones [17]. In what follows we give an overview of the PEP calculus (phago/exo/pino) without replication; more details can be found in [17]. A membrane structure consists of a collection of nested membranes, as can be seen in Table 5. Membranes are formed of patches, where a patch s can be composed from other patches: s = s1 | s2. An elementary patch s consists of an action a followed, after its consumption, by another patch s1: s = a.s1. Actions often come in complementary pairs which cause the interaction between membranes. The names n are used to pair up actions and co-actions. Cardelli motivates that the replication operator is used to model the notion
of a “multitude” of components of the same kind, which is in fact a standard situation in biology [17]. We do not use the replication operator because we are not able to define a membrane system without knowing exactly the initial membrane structure.
Table 5. PEP Calculus - Syntax [17]
Systems P, Q ::= P ◦ Q | σ( ) | σ(P )  (nests of membranes)
Branes σ, τ ::= 0 | σ|τ | a.σ  (combinations of actions)
Actions a, b ::= nց | n̄ց(σ) | nտ | n̄տ | pino(σ)  (phago ց, exo տ)
We denote by P the set of brane systems defined in Table 5. We abbreviate a.0 as a, 0(P ) as (P ), and 0( ) as ( ).
Table 6. PEP Calculus - Structural Congruence [17]
P ◦ Q ≡b Q ◦ P
P ◦ (Q ◦ R) ≡b (P ◦ Q) ◦ R
σ | τ ≡b τ | σ
σ | (τ | ρ) ≡b (σ | τ ) | ρ
σ | 0 ≡b σ
P ≡b Q implies P ◦ R ≡b Q ◦ R
σ ≡b τ implies σ | ρ ≡b τ | ρ
P ≡b Q and σ ≡b τ implies σ(P ) ≡b τ (Q)
σ ≡b τ implies a.σ ≡b a.τ
The structural congruence relation is a way of rearranging the system such that the interacting parts come together, as illustrated in Table 6.
Table 7. PEP Calculus - Reduction Rules [17]
Pino: pino(ρ).σ | σ0 (P ) →b σ | σ0 (ρ( ) ◦ P )
Exo: n̄տ.τ | τ0 (nտ.σ | σ0 (P ) ◦ Q) →b P ◦ σ | σ0 | τ | τ0 (Q)
Phago: nց.σ | σ0 (P ) ◦ n̄ց(ρ).τ | τ0 (Q) →b τ | τ0 (ρ(σ | σ0 (P )) ◦ Q)
Par: P →b Q implies P ◦ R →b Q ◦ R
Mem: P →b Q implies σ(P ) →b σ(Q)
Struct: P ≡b P′ and P′ →b Q′ and Q′ ≡b Q implies P →b Q
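To illustrate how a reduction rule rewrites a nested structure, here is a minimal Python sketch of the Pino rule only; the tuple encoding of systems and branes is a hypothetical choice made for this example, not the notation of [17]:

```python
# Minimal sketch (hypothetical encoding): a PEP system is a list of
# membranes, each membrane a pair (brane, contents), where a brane is a
# list of actions and contents is again a list of membranes.  Only the
# Pino rule of Table 7 is implemented:
#   pino(rho).sigma|sigma0 (P)  ->  sigma|sigma0 (rho( ) o P)

def step_pino(system):
    """Apply the first enabled pino action, top-down; None if nothing fires."""
    for i, (brane, contents) in enumerate(system):
        for j, action in enumerate(brane):
            if isinstance(action, tuple) and action[0] == "pino":
                rho = action[1]                      # brane parameter of pino
                residue = brane[:j] + brane[j + 1:]  # sigma|sigma0 stays outside
                bubble = (rho, [])                   # rho( ) pinched off inside
                new = list(system)
                new[i] = (residue, [bubble] + contents)
                return new
        inner = step_pino(contents)                  # rule (Mem): reduce inside
        if inner is not None:
            new = list(system)
            new[i] = (brane, inner)
            return new
    return None

# pino(rho).sigma (P) with sigma = ["a"], rho = ["r"], P one plain membrane:
P = [([], [])]
sys0 = [([("pino", ["r"]), "a"], P)]
print(step_pino(sys0))  # [(['a'], [(['r'], []), ([], [])])]
```

The pino action is consumed, its residue stays on the enclosing membrane, and an empty bubble carrying the parameter patch appears among the contents, exactly as the Pino rule prescribes.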
In what follows we explain the rules of Table 7. The action pino(ρ) creates an empty bubble within the membrane where the pino action resides; we should imagine that the original membrane buckles inwards and pinches off. The patch ρ on the empty bubble is a parameter of pino. The exo action nտ, which comes with a complementary co-action n̄տ, models the merging of two nested membranes, which starts with the membranes touching at a point. In the process (which is a smooth, continuous process), the subsystem P gets expelled to the outside, and all the residual patches of the two membranes become contiguous. The phago action nց, which also comes with a complementary co-action n̄ց(ρ), models a membrane (the one
with Q) “eating” another membrane (the one with P ). Again, the process has to be smooth and continuous, so it is biologically implementable. It proceeds by the Q membrane wrapping around the P membrane and joining itself on the other side. Hence, an additional layer of membrane is created around the eaten membrane; the patch on that membrane is specified by the parameter ρ of the cophago action (similar to the parameter of the pino action).
4.2.2. Relationship.
Definition 3.30 ([AC8]). A translation T : P → M is given by
T (P ) = [ ]S(σ) if P = σ( )
T (P ) = [T (R)]S(σ) if P = σ(R)
T (P ) = T (Q) T (R) if P = Q ◦ R
where S : P → A is defined as:
S(σ) = σ if σ = nց or σ = nտ or σ = n̄տ
S(σ) = n̄ց S(ρ) if σ = n̄ց(ρ)
S(σ) = pino S(ρ) if σ = pino(ρ)
S(σ) = S(a) S(ρ) if σ = a.ρ
S(σ) = S(τ ) S(ρ) if σ = τ | ρ
S(σ) = λ if σ = 0
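Under a hypothetical tuple encoding of PEP terms (the encoding itself is a choice made for this example), the translation T and the flattening S of Definition 3.30 can be sketched as:

```python
# Hypothetical tuple encoding of PEP terms.  Systems: ("par", P, Q) for
# P o Q and ("mem", sigma, P) for sigma(P), with P = None for sigma( ).
# Branes: "zero" for 0, ("seq", a, rho) for a.rho, ("comp", tau, rho) for
# tau | rho; actions are plain strings such as "exo_n", or ("pino", rho)
# carrying a brane parameter.

def S(sigma):
    """Flatten a brane into the list of surface objects."""
    if sigma == "zero":
        return []                                  # S(0) = lambda
    if isinstance(sigma, tuple) and sigma[0] == "seq":
        return S_action(sigma[1]) + S(sigma[2])    # S(a.rho) = S(a) S(rho)
    if isinstance(sigma, tuple) and sigma[0] == "comp":
        return S(sigma[1]) + S(sigma[2])           # S(tau|rho) = S(tau) S(rho)
    return S_action(sigma)                         # a single action

def S_action(a):
    if isinstance(a, tuple) and a[0] == "pino":
        return ["pino"] + S(a[1])                  # pino(rho) carries S(rho)
    return [a]                                     # e.g. "exo_n" stays itself

def T(P):
    """Translate a PEP term into a list of (surface, contents) membranes."""
    if P[0] == "par":
        return T(P[1]) + T(P[2])                   # T(Q o R) = T(Q) T(R)
    _, sigma, inner = P                            # ("mem", sigma, inner)
    return [(S(sigma), [] if inner is None else T(inner))]

# exo_n.0 ( pino(0) ( ) ) becomes a membrane with "exo_n" on its surface
# containing a membrane with "pino" on its surface:
term = ("mem", ("seq", "exo_n", "zero"), ("mem", ("pino", "zero"), None))
print(T(term))  # [(['exo_n'], [(['pino'], [])])]
```

Each clause of the code mirrors one case of T and S: composition becomes list concatenation, a membrane becomes a pair, and a brane is flattened into the multiset of objects placed on the membrane's surface.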
The rules of the systems of mutual membranes with objects on surface are of the form:
Table 8. Pino/Exo/Phago Rules of M2OS [AC8]
[ ]S(nց.σ|σ0) [ ]S(n̄ց(ρ).τ|τ0) →m [[[ ]S(σ|σ0) ]S(ρ) ]S(τ|τ0)
[[ ]S(nտ.σ|σ0) ]S(n̄տ.τ|τ0) →m [ ]S(σ|σ0|τ|τ0)
[ ]S(pino(ρ).σ|σ0) →m [[ ]S(ρ) ]S(σ|σ0)
The next proposition states that two PEP systems which are structurally equivalent are translated into two systems of mutual membranes with objects on surface which are structurally equivalent.
Proposition 3.31 ([AC8]). If P is a PEP system and M = T (P ) is a system of mutual membranes with objects on surface, then there exists N such that M ≡m N and N = T (Q), whenever P ≡b Q.
Proof. We proceed by structural induction. If P = P1 ◦ P2, where P1 and P2 are two brane systems, then from the definition of ≡b we have that Q = P2 ◦ P1. Using the definition of T and P = P1 ◦ P2, we have that M = T (P ) = T (P1)T (P2). From Q = P2 ◦ P1 and the definition of T we have that there exists N such that N = T (Q) = T (P2)T (P1). From the definition of ≡m we get M ≡m N.
If P = P1 ◦ (P2 ◦ P3), where P1, P2 and P3 are three brane systems, then from the definition of ≡b we have that Q = (P1 ◦ P2) ◦ P3. Using the definition of T and P = P1 ◦ (P2 ◦ P3), we have that M = T (P ) = T (P1)(T (P2)T (P3)).
From Q = (P1 ◦ P2) ◦ P3 and the definition of T we have that there exists N such that N = T (Q) = (T (P1)T (P2))T (P3). From the definition of ≡m we get M ≡m N.
Let P = P1 ◦ P3 and Q = P2 ◦ P3 such that P1 ≡b P2. From the definition of ≡b it follows that P ≡b Q. Using the definition of T we have that M = T (P ) = T (P1)T (P3) and there exists N such that N = T (Q) = T (P2)T (P3). Using the structural induction, from P1 ≡b P2 it follows that T (P1) ≡m T (P2). From the definition of ≡m we get M ≡m N.
Let P = ρ(P1) and Q = τ (P2) such that P1 ≡b P2 and ρ ≡b τ. From the definition of ≡b it follows that P ≡b Q. Using the definition of T we have that M = T (P ) = [T (P1)]S(ρ) and there exists N such that N = T (Q) = [T (P2)]S(τ ). Using the structural induction, from P1 ≡b P2 it follows that T (P1) ≡m T (P2), and from ρ ≡b τ it follows that S(ρ) ≡m S(τ ). From the definition of ≡m we get M ≡m N.
In what follows we prove that indeed from ρ ≡b τ it follows that S(ρ) ≡m S(τ ). We proceed again by structural induction. If ρ = ρ1 | ρ2, where ρ1 and ρ2 are two combinations of brane actions, then from the definition of ≡b we have that τ = ρ2 | ρ1. Using the definition of S and ρ = ρ1 | ρ2, we have that u = S(ρ) = S(ρ1)S(ρ2). From τ = ρ2 | ρ1 and the definition of S we have that there exists v such that v = S(τ ) = S(ρ2)S(ρ1). From the definition of ≡m we get u ≡m v.
If ρ = ρ1 | (ρ2 | ρ3), where ρ1, ρ2 and ρ3 are three combinations of brane actions, then from the definition of ≡b we have that τ = (ρ1 | ρ2) | ρ3. Using the definition of S and ρ = ρ1 | (ρ2 | ρ3), we have that u = S(ρ) = S(ρ1)(S(ρ2)S(ρ3)). From τ = (ρ1 | ρ2) | ρ3 and the definition of S we have that there exists v such that v = S(τ ) = (S(ρ1)S(ρ2))S(ρ3). From the definition of ≡m we get u ≡m v.
If ρ = ρ1 | 0, where ρ1 is a combination of brane actions, then from the definition of ≡b we have that τ = ρ1. Using the definition of S and ρ = ρ1 | 0, we have that u = S(ρ) = S(ρ1)λ.
From τ = ρ1 and the definition of S we have that there exists v such that v = S(τ ) = S(ρ1). From the definition of ≡m we get u ≡m v.
Let ρ = ρ1 | ρ3 and τ = ρ2 | ρ3 such that ρ1 ≡b ρ2. From the definition of ≡b it follows that ρ ≡b τ. Using the definition of S we have that u = S(ρ) = S(ρ1)S(ρ3) and there exists v such that v = S(τ ) = S(ρ2)S(ρ3). Using the structural induction, from ρ1 ≡b ρ2 it follows that S(ρ1) ≡m S(ρ2). From the definition of ≡m we get u ≡m v.
Let ρ = a.ρ1 and τ = a.ρ2 such that ρ1 ≡b ρ2. From the definition of ≡b it follows that ρ ≡b τ. Using the definition of S we have that u = S(ρ) = S(a)S(ρ1) and there exists v such that v = S(τ ) = S(a)S(ρ2). Using the structural induction, from ρ1 ≡b ρ2 it follows that S(ρ1) ≡m S(ρ2). From the definition of ≡m we get u ≡m v.
Proposition 3.32 ([AC8]). If P is a PEP system and M = T (P ) is a system of mutual membranes with objects on surface, then there exists Q such that N = T (Q) whenever M ≡m N.
Proof. If P = P1 ◦ P2, where P1 and P2 are two brane systems, then from the definition of T we have that M = T (P ) = T (P1)T (P2). From the definition of ≡m we get M ≡m N, where N = T (Q) = T (P2)T (P1). For this N there exists Q with Q = P2 ◦ P1 such that N = T (Q).
If P = P1 ◦ (P2 ◦ P3), where P1, P2 and P3 are three brane systems, then from the definition of T we have that M = T (P ) = T (P1)(T (P2)T (P3)). From the definition of ≡m we get M ≡m N, where N = T (Q) = (T (P1)T (P2))T (P3). For this N there exists Q with Q = (P1 ◦ P2) ◦ P3 such that N = T (Q).
If P = P1 ◦ P3, where P1 and P3 are two brane systems, then from the definition of T we have that M = T (P ) = T (P1)T (P3). From the definition of ≡m we get M ≡m N, where N = T (Q) = T (P2)T (P3) for T (P1) ≡m T (P2). For this N there exists Q with Q = P2 ◦ P3 such that N = T (Q).
If P = ρ(P1), where P1 is a brane system and ρ is a combination of brane actions, then from the definition of T we have that M = T (P ) = [T (P1)]S(ρ). From the definition of ≡m we get M ≡m N, where N = T (Q) = [T (P2)]S(τ ) for T (P1) ≡m T (P2) and S(ρ) ≡m S(τ ). For this N there exists Q with Q = σ(P2) and S(τ ) ≡m S(σ) such that N = T (Q).
Remark 3.33 ([AC8]). In Proposition 3.32 it is possible that P ≢b Q. Suppose P = nց.nտ( ). By translation we obtain M = T (P ) = [ ]nցnտ ≡m [ ]nտnց = N. It is possible to have Q = nտ.nց( ) or Q = nտ | nց( ) such that N = T (Q), but P ≢b Q.
Proposition 3.34 ([AC8]). If P is a PEP system and M = T (P ) is a system of mutual membranes with objects on surface, then there exists N such that M →m N and N = T (Q), whenever P →b Q.
Proof. We proceed by structural induction.
If P = pino(ρ).σ | σ0 (P1), where P1 is a brane system and pino(ρ).σ | σ0 is a combination of brane actions, then from the definition of →b we have that Q = σ | σ0 (ρ( ) ◦ P1). Using the definition of T and P = pino(ρ).σ | σ0 (P1), we have that M = T (P ) = [T (P1)]S(pino(ρ).σ|σ0). From Q = σ | σ0 (ρ( ) ◦ P1) and the definition of T we have that there exists N such that N = T (Q) = [[T (P1)]S(ρ) ]S(σ|σ0). From the definition of →m we get M →m N.
If P = n̄տ.τ | τ0 (nտ.σ | σ0 (P1) ◦ P2), where P1, P2 are two brane systems and n̄տ.τ | τ0, nտ.σ | σ0 are combinations of brane actions, then from the definition of →b we have that Q = P1 ◦ σ | σ0 | τ | τ0 (P2). Using the definition of T and P = n̄տ.τ | τ0 (nտ.σ | σ0 (P1) ◦ P2), we have that M = T (P ) = [[T (P1)]S(nտ.σ|σ0) T (P2)]S(n̄տ.τ|τ0). From Q = P1 ◦ σ | σ0 | τ | τ0 (P2) and the definition of T we have that there exists N such that
N = T (Q) = T (P1)[T (P2)]S(σ|σ0|τ|τ0). From the definition of →m we get M →m N.
If P = nց.σ | σ0 (P1) ◦ n̄ց(ρ).τ | τ0 (P2), where P1, P2 are two brane systems and nց.σ | σ0, n̄ց(ρ).τ | τ0 are combinations of brane actions, then from the definition of →b we have that Q = τ | τ0 (ρ(σ | σ0 (P1)) ◦ P2). Using the definition of T and P = nց.σ | σ0 (P1) ◦ n̄ց(ρ).τ | τ0 (P2), we have that M = T (P ) = [T (P1)]S(nց.σ|σ0) [T (P2)]S(n̄ց(ρ).τ|τ0). From Q = τ | τ0 (ρ(σ | σ0 (P1)) ◦ P2) and the definition of T we have that there exists N such that N = T (Q) = [[[T (P1)]S(σ|σ0) ]S(ρ) T (P2)]S(τ|τ0). From the definition of →m we get M →m N.
Let P = P1 ◦ P3 and Q = P2 ◦ P3 such that P1 →b P2. From the definition of →b it follows that P →b Q. Using the definition of T we have that M = T (P ) = T (P1)T (P3) and there exists N such that N = T (Q) = T (P2)T (P3). Using the structural induction, from P1 →b P2 it follows that T (P1) →m T (P2). From the definition of →m we get M →m N.
Let P = σ(P1) and Q = σ(P2) such that P1 →b P2. From the definition of →b it follows that P →b Q. Using the definition of T we have that M = T (P ) = [T (P1)]S(σ) and there exists N such that N = T (Q) = [T (P2)]S(σ). Using the structural induction, from P1 →b P2 it follows that T (P1) →m T (P2). From the definition of →m we get M →m N.
Let P →b Q such that P ≡b P′, P′ →b Q′ and Q′ ≡b Q. Using the definition of T we have that M = T (P ) = T (P′) and there exists N such that N = T (Q) = T (Q′). Using the structural induction, from P′ →b Q′ it follows that T (P′) →m T (Q′). From the definition of →m we get M →m N.
Proposition 3.35 ([AC8]). If P is a PEP system and M = T (P ) is a system of mutual membranes with objects on surface, then there exists Q such that N = T (Q) whenever M →m N.
Proof.
If P = pino(ρ).σ | σ0 (P1), where P1 is a brane system and pino(ρ).σ | σ0 is a combination of brane actions, then from the definition of T we have that M = T (P ) = [T (P1)]S(pino(ρ).σ|σ0). From the definition of →m we get M →m N, where N = [[T (P1)]S(ρ) ]S(σ|σ0). For this N there exists Q with Q = σ1(ρ1( ) ◦ P1), with S(σ1) = S(σ | σ0) and S(ρ1) = S(ρ), such that N = T (Q).
If P = n̄տ.τ | τ0 (nտ.σ | σ0 (P1) ◦ P2), where P1, P2 are two brane systems and n̄տ.τ | τ0, nտ.σ | σ0 are combinations of brane actions, then from the definition of T we have that M = T (P ) = [[T (P1)]S(nտ.σ|σ0) T (P2)]S(n̄տ.τ|τ0). From the definition of →m we get M →m N, where N = T (P1)[T (P2)]S(σ|σ0|τ|τ0). For this N there exists Q with Q = P1 ◦ σ1(P2) and S(σ1) = S(σ | σ0 | τ | τ0) such that N = T (Q).
If P = nց.σ | σ0 (P1) ◦ n̄ց(ρ).τ | τ0 (P2), where P1, P2 are two brane systems and nց.σ | σ0, n̄ց(ρ).τ | τ0 are combinations of brane actions, then from the definition of T we have that M = T (P ) = [T (P1)]S(nց.σ|σ0) [T (P2)]S(n̄ց(ρ).τ|τ0). From the definition of →m we get
M →m N, where N = [[[T (P1)]S(σ|σ0) ]S(ρ) T (P2)]S(τ|τ0). For this N there exists Q with Q = τ1(ρ1(σ1(P1)) ◦ P2) and S(τ1) = S(τ | τ0), S(ρ1) = S(ρ), S(σ1) = S(σ | σ0) such that N = T (Q).
If P = P1 ◦ P3, where P1, P3 are two brane systems, then from the definition of T we have that M = T (P ) = T (P1)T (P3). From the definition of →m we get M →m N, where N = T (P2)T (P3) for T (P1) →m T (P2). For this N there exists Q with Q = P2 ◦ P3 such that N = T (Q).
If P = σ(P1), where P1 is a brane system, then from the definition of T we have that M = T (P ) = [T (P1)]S(σ). From the definition of →m we get M →m N, where N = [T (P2)]S(σ) for T (P1) →m T (P2). For this N there exists Q with Q = σ1(P2) and S(σ1) = S(σ) such that N = T (Q).
If P ≡b P′, where P′ is a brane system, then from the definition of T we have that M = T (P ) = T (P′). From the definition of →m we get M →m N, where N = T (Q′) for T (P′) →m T (Q′). For this N there exists Q with Q ≡b Q′ such that N = T (Q).
The next remark is a consequence of the fact that we translate a formalism with an interleaving semantics into a formalism with a parallel semantics.
Remark 3.36 ([AC8]). In Proposition 3.35 it is possible that P ↛b Q. Suppose P = n̄տ.nտ(nտ.nց( )). By translation we obtain M = [[ ]nտnց]n̄տnտ, such that M →m [ ]nցnտ. We observe that there exists Q = nց.nտ( ) such that N = T (Q), but P ↛b Q.
The PEP calculus could be extended as in [17] to contain also molecules inside the membranes. A new reduction rule would simulate the simultaneous exchange of molecules between the interior and the exterior of a membrane. In this case the translation can easily be extended by introducing objects in membranes as in [12] and an antiport evolution rule in the definition of →m.
5. Mobile Membranes with Timers
There are some papers using time in the context of membrane computing; however, time is defined and used there in a different manner than here.
In [24] a timed P system is introduced by associating to each rule a natural number representing the time of its execution. A P system which always produces the same result, independently of the execution times of the rules, is called a time-independent P system. The notion of time-independent P systems tries to capture the class of systems which are robust against the influence of the environment on the execution times of the rules of the system. Other types of time-free systems are considered in [20, 25]. The execution time of the rules is stochastically determined in [23]. Experiments on the reliability of the computations have been considered, and links with the idea of time-free systems are also discussed.
Time can also be used to “control” the computation, for instance by appropriate changes in the execution times of the rules during a computation; this possibility has been considered in [27]. Moreover, timed P automata have been proposed and investigated in [7], where ideas from timed automata have been incorporated into timed P systems. Frequency P systems have been introduced and investigated in [78]. In frequency P systems each membrane is clocked independently of the others, and each membrane operates at a certain frequency which can change during the execution. The dynamics of such systems has been investigated. If one supposes the existence of two time scales (an external time of the user, and an internal time of the device), then it is possible to implement accelerated computing devices which can have more computational power than Turing machines. This approach has been used in [16] to construct accelerated P systems, where acceleration is obtained either by decreasing the size of the reactors or by speeding up the communication channels. In [21, 51] the time of occurrence of certain events is used to compute numbers. If specific events (such as the use of certain rules, or the entering/exit of certain objects into/from the system) can be freely chosen, then it is easy to obtain computational completeness results. However, if the length (number of steps) of the computation is considered as its result, non-universal systems can be obtained. In [51, 80, 88] time is considered as the result of the computation by using special “observable” configurations taken from regular sets (with the time elapsed between such configurations considered as output). In particular, in [51, 80] P systems with symport and antiport rules are considered for proving universality results, and in [88] this idea is applied to P systems with proteins embedded on the membranes. We have also considered time to “control” the computation in mobile ambients [AC2, AC3, AC9].
Timers define timeouts for various resources, making them available only for a determined period of time. The passage of time is given by a discrete global time progress function. The evolution of complicated real systems frequently involves various interdependences among components. Some mathematical models of such systems combine both discrete and continuous evolutions on multiple time scales, spanning many orders of magnitude. For example, the molecular operations of a living cell in nature can be thought of as such a dynamical system. The molecular operations happen on time scales ranging from 10⁻¹⁵ to 10⁴ seconds, and proceed in ways which are dependent on populations of molecules ranging in size from as few as approximately 10¹ to as many as approximately 10²⁰. Molecular biologists have used formalisms developed in computer science (e.g. hybrid Petri nets) to obtain simplified models of portions of these transcription and gene regulation processes. According to [72]: (i) “the life span of intracellular proteins varies from as short as a few minutes for mitotic cyclins, which help regulate passage through
mitosis, to as long as the age of an organism for proteins in the lens of the eye.” (ii) “Most cells in multicellular organisms . . . carry out a specific set of functions over periods of days to months or even the lifetime of the organism (nerve cells, for example).”
It is obvious that timers play an important role in biological evolution. We use an example from the immune system.
Example 3.37 ([53]). T-cell precursors arriving in the thymus from the bone marrow spend up to a week differentiating there before they enter a phase of intense proliferation. In a young adult mouse the thymus contains around 10⁸ to 2×10⁸ thymocytes. About 5×10⁷ new cells are generated each day; however, only about 10⁶ to 2×10⁶ (roughly 2–4%) of these will leave the thymus each day as mature T cells. Despite the disparity between the number of T cells generated daily in the thymus and the number leaving, the thymus does not continue to grow in size or cell number. This is because approximately 98% of the thymocytes which develop in the thymus also die within the thymus.
Inspired by these biological facts, we add timers to objects and membranes. We use a global clock to simulate the passage of time in a membrane system.
Definition 3.38 ([AC13]). A system of n ≥ 1 mutual mobile membranes with timers is a construct Π = (V, H, µ, w1, . . . , wn, R, T, iO), where
(1) V, H, µ, w1, . . . , wn, iO are as in Definition 3.18;
(2) T ⊆ {∆t | t ∈ N} is a set of timers assigned to the membranes and objects of the initial configuration; a timer ∆t indicates that the resource is available only for a determined period of time t;
(3) R is a finite set of developmental rules of the following forms:
object time-passing
(a) a^{∆t} → a^{∆(t−1)}, for all a ∈ V and t > 0
If an object a has a timer t > 0, then its timer is decreased.
object dissolution
(b) a^{∆0} → λ, for all a ∈ V
If an object a has its timer equal to 0, then the object is replaced with the empty multiset λ, thus simulating the degradation of proteins.
mutual endocytosis
(c) [u^{∆t̃u} v^{∆t̃v}]h^{∆th} [ū^{∆t̃ū} v′^{∆t̃v′}]m^{∆tm} → [[w^{∆t̃w}]h^{∆(th−1)} w′^{∆t̃w′}]m^{∆tm}, for h, m ∈ H, u, ū ∈ V+, v, v′, w, w′ ∈ V∗, and all timers greater than 0;
For a multiset of objects u, t̃u is a multiset of positive integers representing the timers of the objects from u. An elementary membrane labelled h enters the adjacent membrane labelled m under the control of the multisets of objects u and ū. The labels
h and m remain unchanged during this process; however, the multisets of objects uv and ūv′ are replaced with the multisets of objects w and w′, respectively. If an object a from the multiset uv has the timer ta and is preserved in the multiset w, then its timer is now ta − 1. If there is an object which appears in w but does not appear in uv, then its timer is given according to the right hand side of the rule. Similar reasonings hold for the multisets ūv′ and w′. The timer th of the active membrane h is decreased, while the timer tm of the passive membrane m remains the same in order to allow m to be involved in other rules.
mutual exocytosis
(d) [ū^∆t̃ū v′^∆t̃v′ [u^∆t̃u v^∆t̃v]_h^∆th ]_m^∆tm → [w^∆t̃w]_h^∆(th−1) [w′^∆t̃w′]_m^∆tm, for h, m ∈ H, u, ū ∈ V+, v, v′, w, w′ ∈ V*, and all timers greater than 0;
An elementary membrane labelled h exits a membrane labelled m, under the control of the multisets of objects u and ū. The labels of the two membranes remain unchanged, but the multisets of objects uv and ūv′ are replaced with the multisets of objects w and w′, respectively. The notations and the method of decreasing the timers are similar to those of the previous rule.
membrane time-passing
(e) [ ]_h^∆t → [ ]_h^∆(t−1), for all h ∈ H
For each membrane which did not participate as an active membrane in a rule of type (c) or (d), if its timer is t > 0, this timer is decreased.
membrane dissolution
(f) [ ]_h^∆0 → [δ]_h^∆0, for all h ∈ H
A membrane labelled h is dissolved when its timer reaches 0.
These rules are applied according to the following principles:
(1) All the rules are applied in parallel: in a step, the rules are applied to all objects and to all membranes; an object can only be used by one rule and is chosen non-deterministically (there is no priority among rules), but any object which can evolve by a rule of any form should evolve.
(2) The membrane m from the rules of type (c)−(d) is said to be passive (marked by the use of ū), while the membrane h is said to be active (marked by the use of u). In any step of a computation, any object and any active membrane can be involved in at most one rule, while passive membranes are not considered involved in the use of rules (c) and (d); hence they can be used by several rules (c) and (d) at the same time. Finally, rule (e) is applied to passive membranes and to other unused membranes; this indicates the end of a time-step.
(3) When a membrane is moved across another membrane, by endocytosis or exocytosis, its whole contents (its objects) are moved. (4) If a membrane exits the system (by exocytosis), then its evolution stops. (5) An evolution rule can produce the special object δ to specify that, after the application of the rule, the membrane where the rule has been applied has to be dissolved. After dissolving a membrane, all objects and membranes previously contained in it become contained in the immediately upper membrane. (6) The skin membrane has its timer equal to ∞, so it can never be dissolved. (7) If a membrane or an object has its timer equal to ∞, when applying the rules simulating the passage of time we use the equality ∞ − 1 = ∞.
5.1. Mobile Membranes With and Without Timers. The following results describe some relationships between systems of mutual mobile membranes with timers and systems of mutual mobile membranes without timers.
Proposition 3.39 ([AC13]). For every system of n mutual mobile membranes without timers there exists a system of n mutual mobile membranes with timers having the same evolution and output.
Proof. It is easy to prove that the systems of mutual mobile membranes with timers include the systems of mutual mobile membranes without timers, since we can assign ∞ to all timers appearing in the membrane structure and evolution rules.
A somewhat surprising result is that mutual mobile membranes with timers can be simulated by mutual mobile membranes without timers.
Proposition 3.40 ([AC13]). For every system of n mutual mobile membranes with timers there exists a system of n mutual mobile membranes without timers having the same evolution and output.
Proof. We use the notation rhs(r) to denote the multisets which appear in the right hand side of a rule r.
This notation is extended naturally to multisets of rules: given a multiset of rules R, the right hand side rhs(R) is obtained by adding the right hand sides of the rules in the multiset, considered with their multiplicities. Each object a ∈ V from a system of mutual mobile membranes with timers has a maximum lifetime (denoted by lifetime(a)) which can be calculated as follows:
lifetime(a) = max({t | a^∆t ∈ wi^∆t̃i, 1 ≤ i ≤ n} ∪ {t | a^∆t ∈ rhs(R)})
In what follows we present the steps required to build a system of mutual mobile membranes without timers starting from a system
of mutual mobile membranes with timers, such that both provide the same result and have the same number of membranes.
(1) A membrane structure [w^∆t̃]_mem^∆t_mem from a system of mutual mobile membranes with timers is translated into the membrane structure [w b̃_{w,0} b_{mem,0}]_mem of a system of mutual mobile membranes without timers.
The timers of the elements from a system of mutual mobile membranes with timers are simulated using some additional objects in the corresponding system of mutual mobile membranes without timers, as we show in the next steps of the translation. The object b_{mem,0} placed inside the membrane labelled mem is used to simulate the passage of time for the membrane. The initial multiset of objects w^∆t̃ from membrane mem in the system of mutual mobile membranes with timers is translated into the multiset w inside membrane mem in the corresponding system of mutual mobile membranes without timers, together with a multiset of objects b̃_{w,0}. The multiset b̃_{w,0} is constructed as follows: for each object a ∈ w, an object b_{a,0} is added in membrane mem in order to simulate the passage of time.
(2) The rules a^∆t → a^∆(t−1), a ∈ V, 0 < t ≤ lifetime(a) from the system of mutual mobile membranes with timers can be simulated in the system of mutual mobile membranes without timers using the following rules:
a b_{a,t} → a b_{a,t+1}, for all a ∈ V and 0 ≤ t ≤ lifetime(a) − 1
The object b_{a,t} is used to keep track of the number t of time units which have passed since the object a was created. This rule simulates the passage of a unit of time from the lifetime of object a in the system of mutual mobile membranes with timers, by increasing the second subscript of the object b_{a,t} in the system of mutual mobile membranes without timers.
(3) The rules a^∆0 → λ, a ∈ V from the system of mutual mobile membranes with timers can be simulated in the system of mutual mobile membranes without timers using the following rules:
a b_{a,ta} → λ, for all a ∈ V such that ta = lifetime(a)
If an object b_{a,ta} has its second subscript equal to lifetime(a) in the system of mutual mobile membranes without timers, it means that the timer of object a is 0 in the corresponding system of mutual mobile membranes with timers. In this case, the objects b_{a,ta} and a are replaced by λ, thus simulating that the object a disappears together with its timer in the system of mutual mobile membranes with timers.
(4) The rules [u^∆t̃u v^∆t̃v]_h^∆th [ū^∆t̃ū v′^∆t̃v′]_m^∆tm → [ [w^∆t̃w]_h^∆(th−1) w′^∆t̃w′ ]_m^∆tm, h, m ∈ H, u, ū ∈ V+, v, v′, w, w′ ∈ V*, with all the timers greater than 0, from the system of mutual mobile membranes with timers can be simulated in the system of mutual mobile membranes without timers using the following rules:
[u b̃_{u,t1} v b̃_{v,t2} b_{h,t3}]_h [ū b̃_{ū,t4} v′ b̃_{v′,t5} b_{m,t6}]_m → [ [w b̃_{w,t7} b_{h,t3+1}]_h w′ b̃_{w′,t8} b_{m,t6} ]_m, where h, m ∈ H, u, ū ∈ V+, v, v′, w, w′ ∈ V*, and for each b_{a,j} we have 0 ≤ j ≤ lifetime(a) − 1.
A multiset b̃_{u,t1} consists of all objects b_{a,j}, where a is an object from the multiset u. If an object a from the multiset uv has its timer ta and it appears in the multiset w, then its timer becomes ta − 1. If there is an object which appears in w but is not in uv, then its timer is given according to the right hand side of the rule. Similar reasonings hold for the multisets ūv′ and w′. The timer of the active membrane h is increased (the object b_{h,t3} is replaced by b_{h,t3+1}), while the timer of the passive membrane m remains the same in order to allow m to be used in other rules.
(5) The rules [ū^∆t̃ū v′^∆t̃v′ [u^∆t̃u v^∆t̃v]_h^∆th ]_m^∆tm → [w^∆t̃w]_h^∆(th−1) [w′^∆t̃w′]_m^∆tm, h, m ∈ H, u, ū ∈ V+, v, v′, w, w′ ∈ V*, with all the timers greater than 0, from the system of mutual mobile membranes with timers can be simulated in the system of mutual mobile membranes without timers using the following rules:
[ū b̃_{ū,t4} v′ b̃_{v′,t5} b_{m,t6} [u b̃_{u,t1} v b̃_{v,t2} b_{h,t3}]_h ]_m → [w b̃_{w,t7} b_{h,t3+1}]_h [w′ b̃_{w′,t8} b_{m,t6}]_m, where h, m ∈ H, u, ū ∈ V+, v, v′, w, w′ ∈ V*, and for each b_{a,j} we have 0 ≤ j ≤ lifetime(a) − 1.
The way these rules work is similar to the previous case.
(6) The rules [ ]_h^∆t → [ ]_h^∆(t−1) from the system of mutual mobile membranes with timers can be simulated in the system of mutual mobile membranes without timers using the following rules:
b_{h,t} → b_{h,t+1}, for all h ∈ H and 0 ≤ t ≤ th − 1.
For a membrane h from the system of mutual mobile membranes with timers, th represents its lifetime. The object b_{h,t} is used to keep track of the number t of time units which have passed from the lifetime of the membrane h. This rule simulates the passage of a unit of time from the lifetime of membrane h in the system of mutual mobile membranes with timers, by increasing the second subscript of the object b_{h,t} in the system of mutual mobile membranes without timers.
(7) The rules [ ]_h^∆0 → [δ]_h^∆0 from the system of mutual mobile membranes with timers can be simulated in the system of mutual mobile membranes without timers using the following rules:
[b_{h,t}]_h → [δ]_h, for all h ∈ H such that t = th
If an object b_{h,t} has its second subscript equal to th in the system of mutual mobile membranes without timers, it means that the timer of membrane h is 0 in the corresponding system of mutual mobile membranes with timers. In this case, the object b_{h,t} is replaced by δ, thus marking the membrane for dissolution and simulating that the membrane is dissolved together with its timer in the system of mutual mobile membranes with timers.
We are now able to establish the computational power of systems of mutual mobile membranes with timers. We denote by NtMM_m(mutual endo, mutual exo) the family of sets of natural numbers generated by systems of m ≥ 1 mutual mobile membranes with timers by using mutual endocytosis and mutual exocytosis rules. We also denote by NRE the family of all sets of natural numbers generated by arbitrary grammars.
Proposition 3.41 ([AC13]). NtMM_3(mutual endo, mutual exo) = NRE.
Proof (Sketch). Since the output of each system of mutual mobile membranes with timers can be obtained by a system of mutual mobile membranes without timers, we cannot get more than the computational power of mutual mobile membranes without timers.
Therefore, according to Theorem 3.21, the family NtMM_3 of sets of natural numbers generated by systems of mutual mobile membranes with timers coincides with the family NRE of sets of natural numbers generated by arbitrary grammars.
6. Mobile Membranes and Mobile Ambients
In what follows we describe a relationship between mobile ambients and membrane systems. This relationship is mainly provided by an encoding of the safe mobile ambients into systems of mutual mobile membranes.
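Before turning to ambients, the core idea behind Proposition 3.40 — a count-down timer a^∆t simulated by a count-up counter object b_{a,t} bounded by lifetime(a) — can be sketched in Python. This is a toy model of the object rules (a)/(b) and their counter-based simulation only, not the book's formal construction; the concrete lifetimes are illustrative.

```python
# Toy model: objects as (name, clock) pairs; one tick of the global clock.
LIFETIME = {"a": 3, "b": 1}  # hypothetical lifetime(a) values

def step_with_timers(multiset):
    """Timed system: rule (a) decreases timers, rule (b) drops objects at 0."""
    out = []
    for obj, t in multiset:           # t = remaining time
        if t > 0:
            out.append((obj, t - 1))  # rule (a): a^dt -> a^d(t-1)
        # t == 0: rule (b): a^d0 -> lambda, the object disappears
    return out

def step_with_counters(multiset):
    """Untimed simulation: counters b_{a,t} count up to lifetime(a)."""
    out = []
    for obj, t in multiset:           # t = elapsed time since creation
        if t < LIFETIME[obj]:
            out.append((obj, t + 1))  # rule: a b_{a,t} -> a b_{a,t+1}
        # t == lifetime(obj): rule: a b_{a,t} -> lambda
    return out

# Objects disappear after exactly the same number of ticks in both systems.
timed = [("a", 3), ("b", 1)]
counted = [("a", 0), ("b", 0)]
for _ in range(4):
    timed = step_with_timers(timed)
    counted = step_with_counters(counted)
assert timed == [] and counted == []
```

The symmetry between "decrease until 0" and "increase until lifetime(a)" is precisely what makes the translation output-preserving.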
6.0.1. Safe Mobile Ambients. We give a short description of pure safe ambients (SA); more information can be found in [70, 71]. Given an infinite set of names N (ranged over by m, n, . . . ), we define the set A of SA-processes (denoted by A, A′, B, B′, . . . ) together with their capabilities (denoted by C, C′, . . . ) as follows:
Table 9. Safe Mobile Ambients - Syntax [70]
C ::= in n | i̅n̅ n | out n | o̅u̅t̅ n | open n | o̅p̅e̅n̅ n
A ::= 0 | A | B | C.A | n[ A ] | (νn)A
Process 0 is the inactive mobile ambient. A movement C.A is provided by the capability C, followed by the execution of A. An ambient n[ A ] represents a bounded place labelled by n in which an SA-process A is executed. A | B is a parallel composition of the mobile ambients A and B. (νn)A creates a new unique name n within the scope of A. The structural congruence ≡amb over ambients is the least congruence satisfying the following requirements:
Table 10. Safe Mobile Ambients - Structural Congruence [70]
(A, |, 0) is a commutative monoid;
(νn)0 ≡amb 0;
(νn)(νm)A ≡amb (νm)(νn)A;
(νm)A ≡amb (νn)A{n/m}, where n ∉ fn(A);
(νn)(A | B) ≡amb A | (νn)B, where n ∉ fn(A);
n ≠ m implies (νn)m[ A ] ≡amb m[ (νn)A ].
We define the operational semantics of the pure safe ambient calculus in terms of a reduction relation ⇒amb given by the following axioms and rules. By ⇒∗amb we denote the reflexive and transitive closure of the binary relation ⇒amb. We write A ̸⇒amb if there is no ambient B such that A ⇒amb B.
Table 11. Safe Mobile Ambients - Reduction Rules [AC6]
Axioms:
(In) n[ in m.A | A′ ] | m[ i̅n̅ m.B | B′ ] ⇒amb m[ n[ A | A′ ] | B | B′ ];
(Out) m[ n[ out m.A | A′ ] | o̅u̅t̅ m.B | B′ ] ⇒amb n[ A | A′ ] | m[ B | B′ ];
(Open) open n.A | n[ o̅p̅e̅n̅ n.B | B′ ] ⇒amb A | B | B′.
Rules:
(Comp) if A ⇒amb A′, then A | B ⇒amb A′ | B;
(Res) if A ⇒amb A′, then (νn)A ⇒amb (νn)A′;
(Amb) if A ⇒amb A′, then n[ A ] ⇒amb n[ A′ ];
(Struc) if A ≡ A′, A′ ⇒amb B′ and B′ ≡ B, then A ⇒amb B.
The reductions are applied according to the following principles:
(1) All reductions are applied in a maximal parallel manner. The ambients and the capabilities are chosen in a non-deterministic manner, meaning that in each step we apply a set of reductions such that no further reduction can be added to the set;
(2) The ambient m from the rules above is said to be passive, while the ambient n is said to be active. The difference between active and passive ambients is that the passive ambients are not considered involved in reductions (they can be used in several reductions at the same time), while the active ambients can be used in at most one reduction;
(3) An ambient is moved as a whole.
In [AC6] we have defined the formal notion of deadlock for safe ambients. In order to do this, we first defined the set TA of top ambients and the set TC of top capabilities:
Top ambients:
1. TA(0) = ∅
2. TA(n[ ]) = {n}
3. TA(cap n. A) = ∅
4. TA(A | B) = TA(A) ∪ TA(B)
5. TA((νn)A) = TA(A)
Top capabilities:
1. TC(0) = ∅
2. TC(n[ ]) = ∅
3. TC(cap n. A) = {cap n}
4. TC(A | B) = TC(A) ∪ TC(B)
5. TC((νn)A) = TC(A),
where A, B ∈ A, and cap stands for in, i̅n̅, out, o̅u̅t̅, open or o̅p̅e̅n̅. The deadlock for an ambient structure is given by a predicate:
deadlockamb(A) = true if A ̸⇒amb, and false otherwise.
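The definitions of TA and TC admit a direct recursive reading. The following Python sketch computes both on a small example; the tuple-based process representation and the capability tags are our own illustrative choices, not the book's notation.

```python
# SA-processes as tuples:
#   ("zero",)                inactive process 0
#   ("amb", n, body)         ambient n[ body ]
#   ("cap", kind, n, cont)   capability kind n followed by cont
#   ("par", A, B)            parallel composition A | B
#   ("nu", n, A)             restriction (nu n)A

def TA(p):
    """Top-level ambient names of a process (the set TA)."""
    kind = p[0]
    if kind == "zero" or kind == "cap":
        return set()
    if kind == "amb":
        return {p[1]}
    if kind == "par":
        return TA(p[1]) | TA(p[2])
    if kind == "nu":
        return TA(p[2])
    raise ValueError(kind)

def TC(p):
    """Top-level capabilities of a process (the set TC)."""
    kind = p[0]
    if kind in ("zero", "amb"):
        return set()
    if kind == "cap":
        return {(p[1], p[2])}       # e.g. ("in", "m")
    if kind == "par":
        return TC(p[1]) | TC(p[2])
    if kind == "nu":
        return TC(p[2])
    raise ValueError(kind)

# n[ in m | t[] ] | m[ co-in m ]  -- "co-in" stands for the barred capability
A = ("par",
     ("amb", "n", ("par", ("cap", "in", "m", ("zero",)),
                          ("amb", "t", ("zero",)))),
     ("amb", "m", ("cap", "co-in", "m", ("zero",))))
assert TA(A) == {"n", "m"}
assert TC(A) == set()   # all capabilities sit under ambients, none at top level
```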
Deadlock can also be expressed by a set Damb of ambients which cannot evolve anymore; it is enough to describe such a set Damb. This is done by the following rules:
1. 0 ∈ Damb;
2. if A ∈ A, then cap n. A ∈ Damb;
3. if A ∈ Damb, then (νn)A ∈ Damb;
4. if A ∈ Damb and TA(A) = ∅, then n[ A ] ∈ Damb;
5. if A ∈ Damb, TA(A) ≠ ∅, and for all nj ∈ TA(A) with open nj | A ⇒amb A1 | . . . | A′j | . . . | Ak we have out n ∉ TC(A′j), then n[ A ] ∈ Damb;
6. if A1, A2 ∈ Damb and, for s = 1, 2, t = 1, 2, t ≠ s, we have open n^s_k ∉ TC(At) whenever open n^t_k | A^t_k ⇒amb A′^t_k, and in n^s_k ∉ TC(A′^t_k), for all n^t_k ∈ TA(At) and all n^s_k ∈ TA(As), then A1 | A2 ∈ Damb.
We explain in detail the rules 4, 5 and 6.
4. If A ∈ Damb does not contain any top ambients (TA(A) = ∅), then A = A1 | . . . | Ak where each Aj = cap nj . A′j, j = 1, . . . , k. It follows that n[ A ] ∈ Damb for an arbitrary label n because no capability cap nj can be consumed.
5. On the other hand, if A ∈ Damb contains top ambients (TA(A) ≠ ∅), then A can be written as A1 | . . . | Ak where Aj = nj[ A′j ] or Aj = cap nj . A′j, j = 1, . . . , k. For an arbitrary label n, n[ A ] ∈ Damb if for all nj ∈ TA(A) such that Aj = nj[ A′j ] we have out n ∉ TC(A′j).
6. Even if A1, A2 ∈ Damb, it is possible to get A1 | A2 ∉ Damb. Essentially, this can happen because k[ in m ] | m[ ] ∉ Damb and open k | k[ ] ∉ Damb. We consider A1, A2 ∈ Damb with A1 = A^1_1 | . . . | A^1_i and A2 = A^2_1 | . . . | A^2_j, where A^1_k = n^1_k[ A′^1_k ] or A^1_k = cap n^1_k . A′^1_k, k = 1, . . . , i, and A^2_k = n^2_k[ A′^2_k ] or A^2_k = cap n^2_k . A′^2_k, k = 1, . . . , j. We have A1 | A2 ∈ Damb if for each n^t_k ∈ TA(At) such that A^t_k = n^t_k[ A′^t_k ], A′^t_k does not contain a capability which allows the ambient n^t_k to enter an ambient (in n^s_k ∉ TC(A′^t_k)) from the other As (n^s_k ∈ TA(As)), and for each A^t_k = cap n^t_k . A′^t_k, cap n^t_k does not open an ambient (open n^s_k ∉ TC(At)) from the other As, where s = 1, 2, t = 1, 2 and s ≠ t.
Using the notions introduced by Cardelli and Gordon in [18], we say that an SA-process A exhibits an ambient labelled by n, and write A ↓amb n, whenever A is an SA-process containing a top level ambient labelled by n. We say that an SA-process A eventually exhibits an ambient labelled by n, and write A ⇓amb n, whenever after some number of reductions A exhibits an ambient labelled by n. Formally, we define:
A ↓amb n =def A ≡amb (νm1) . . . (νmi)(n[ A′ ] | A′′), where n ∉ {m1, . . . , mi}.
A ⇓amb n =def A ⇒∗amb B and B ↓amb n.
Lemma 3.42 ([18]). If A ≡amb B and B ↓amb n, then A ↓amb n.
Lemma 3.43 ([18]). If A ≡amb B and B ⇓amb n, then A ⇓amb n.
Following [18] and [75], a contextual equivalence is a binary relation over ambients defined in terms of A ⇓amb n. A context Camb is an ambient containing zero or more holes; for any ambient A, Camb(A) is the ambient obtained by filling each hole in Camb with a copy of A. The contextual equivalence A ≃amb B between two ambients A and B is defined by:
A ≃amb B =def for all n and Camb, Camb(A) ⇓amb n iff Camb(B) ⇓amb n.
6.1. Relationship of Safe Mobile Ambients with Mutual Mobile Membranes. In [AC1] we present the following translation steps:
* every safe process 0 is replaced by the empty multiset λ;
* every ambient n is translated into a membrane labelled by n;
* every capability cap n is translated both into an object “cap n” and into a membrane labelled by “cap n”, both placed in the same region;
* every path of capabilities is translated into a nested structure of membranes (e.g., in m. out n is translated into in m [ out n [ ]out n ]in m );
* an object dlock is placed near the membrane structure after the translation is done; the additional object dlock prevents the consumption of capability objects in a membrane system which corresponds to a mobile ambient from the set Damb.
A feature of pure safe ambients is that they have a spatial tree-like structure. The nodes in this structure are represented by ambients and capabilities. When translating a pure safe ambient into a membrane system we obtain the same tree structure by means of membranes: every node is a membrane having the same label as the corresponding ambient or capability. Let us consider the ambient n[ in m | t[ ] ] | m[ i̅n̅ m ]; translating it into a membrane system, we obtain [ in m [ ]in m [ ]t ]n [ i̅n̅ m [ ]i̅n̅ m ]m.
Remark 3.44 ([AC1]). Whenever we encode a path of capabilities, we should preserve the order in which the capabilities are consumed. This order is preserved by the translation given above, even if it requires lots of resources. Another solution is to encode every capability only into an object, and to preserve the order of the objects by adding extra objects and rules into the system. This can be done by introducing objects able to enchain a certain sequence of rules: for instance, if we have in n. in m . . ., then in the corresponding membrane system we have the rules: in n → in n x, in n x → in m y, . . .
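The chaining idea of Remark 3.44 — extra link objects forcing one capability object to be consumed before the next — can be sketched with a toy multiset rewriting step in Python. Sets stand in for multisets, and the rule shapes and object names (x, y) are illustrative simplifications, not the book's exact rules.

```python
# Each rule consumes its left-hand multiset and produces its right-hand one.
# The link object x makes the second rule applicable only after the first.
rules = [
    ({"in n"}, {"x"}),                # in n is consumed, link object x appears
    ({"x", "in m"}, {"y"}),           # in m can only fire once x is present
]

def apply_once(state, rules):
    """Apply the first applicable rule; return the state unchanged if none."""
    for lhs, rhs in rules:
        if lhs <= state:              # rule applicable: lhs contained in state
            return (state - lhs) | rhs
    return state

state = {"in n", "in m"}
state = apply_once(state, rules)      # only the in n rule is enabled here
assert state == {"in m", "x"}
state = apply_once(state, rules)      # now the chained in m rule fires
assert state == {"y"}
```

Even though both capability objects coexist initially, the order in n before in m is enforced purely by object availability, which is the point of the encoding.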
Cardelli and Gordon use in [18] the following structure:
p[ succ[ open op ] ] | open q.open p.P | op[ in succ.in p.in succ. (q[ out succ.out succ.out p ] | open op) ]
Starting from such a structure, we look for a translation such that the order of the capabilities is preserved. For every consumed pair of capabilities in safe ambients, there is a change in the ambient structure. We simulate this with the help of some special developmental rules in membrane systems. An object one is used to ensure that no more than one pair (capability, co-capability) is consumed at every tick of the universal clock. Using rules for moving a membrane as in [63, 67], we define the following developmental rules:
a) [ in m dlock one ]n [ i̅n̅ m ]m → [ in∗ m in∗ m dlock ]n [ i̅n̅∗ m ]m
If a membrane n (containing the objects in m, dlock, one) and a membrane m (containing an object i̅n̅ m) are placed in the same region, then the objects in m and one are replaced by the objects in∗ m and in∗ m; the object i̅n̅ m is replaced by the object i̅n̅∗ m. The object in∗ m is used to control the process of introducing membrane n into membrane m, and the objects in∗ m, i̅n̅∗ m are used to dissolve the membranes in m and i̅n̅ m.
b) cap∗ m [ ]cap m → [ δ ]cap m
If an object cap∗ m and the membrane labelled by cap m are placed in the same region, then the object cap∗ m is consumed and the membrane labelled by cap m is dissolved (this is denoted by
the symbol δ). This rule simulates the consumption of a capability cap m in ambients.
c) [ in∗ m ]n [ ]m → [ [ ]n ]m |[ ¬cap∗ ]m
If an elementary membrane n (containing an object in∗ m) and a membrane labelled by m (which does not contain star objects – this is denoted by |[ ¬cap∗ ]m) are placed in the same membrane, then the membrane n enters the membrane labelled by m, and the object in∗ m is consumed in this process. The | operator is used to denote the fact that the rule from its left hand side can be applied only if the conditions from its right hand side are satisfied.
d) [ o̅u̅t̅ m [ out m dlock one ]n ]m → [ o̅u̅t̅∗ m [ out∗ m out∗ m dlock ]n ]m
If a membrane m contains both an object o̅u̅t̅ m and a membrane n (having the objects out m, dlock, one), then the objects out m and one are replaced by out∗ m, out∗ m; moreover, o̅u̅t̅ m is replaced by the object o̅u̅t̅∗ m. The object out∗ m is used to control the process of extracting membrane n from membrane m, and the objects out∗ m, o̅u̅t̅∗ m are used to dissolve the membranes out m and o̅u̅t̅ m, respectively.
e) [ [ out∗ m ]n ]m → [ ]n [ ]m
If a membrane m contains an elementary membrane n which has an object out∗ m, then membrane n is extracted from the membrane labelled by m, and the object out∗ m is consumed in this process.
f) [ o̅p̅e̅n̅ m ]m open m dlock one → [ δ ]m open∗ m o̅p̅e̅n̅∗ m dlock
If a membrane m and the objects open m, dlock, one are placed inside the same region, then membrane m is dissolved, the object o̅p̅e̅n̅ m is consumed, and the objects open m and one are replaced by the objects open∗ m and o̅p̅e̅n̅∗ m.
g) [ U∗ [ ]t ]n → [ U∗ [ out∗ n in∗ n U∗ ]t ]n |[ ¬cap∗ ]t
U∗ is used to denote an arbitrary set of star objects placed in membrane n. If a membrane n contains a set of star objects U∗ and a membrane t which does not contain star objects (this is denoted by |[ ¬cap∗ ]t), then a copy of the set U∗ and two new objects in∗ n and out∗ n are created inside membrane t.
The existence of a set U∗ of star objects indicates that membrane n can be used by rules c), e) to enter/exit into/from another membrane. In order to move, membrane n must be elementary; to accomplish this, the objects out∗ n, in∗ n and a copy of the set U∗ are created inside membrane t such that membrane t can be extracted. After membrane n completes its movement (this is denoted by the fact that the membrane labelled by n does not contain star objects), membrane t is introduced back into membrane n.
h) dlock [ ]n → dlock [ dlock ]n | ¬n [ ¬dlock ]n
If an object dlock and a membrane n (which does not already contain an object dlock, i.e., [ ¬dlock ]n) are placed inside the
same region, and there is no object n placed in that region (denoted by ¬n), then a new object dlock is placed inside membrane n. This rule specifies the fact that the object dlock can only pass through membranes corresponding to translated ambients; this makes impossible the consumption of capability objects from the translated structures from Damb.
i) [ dlock ]n → [ ]n
The object dlock created by a rule h) and located inside membrane n is removed.
j) [ dlock ]n → [ dlock one ]n
If a membrane n contains an object dlock, then an additional object one is created in membrane n.
k) one → [ δ ]
An object one is consumed; the last two rules ensure that at most one object one exists in the membrane system at any moment.
Remark 3.45 ([AC1]). Whenever we get the membrane system [ in m [ ]in m o̅u̅t̅ n [ ]o̅u̅t̅ n [ out n [ ]out n ]t ]n [ i̅n̅ m [ ]i̅n̅ m ]m, after applying the developmental rules from the set defined above in a maximally parallel manner, we could obtain either the membrane system [ [ ]n [ ]t ]m or [ ]t [ [ ]n ]m. The order in which the pairs of objects corresponding to translated capabilities are consumed in the membrane encoding should be the same as the order in which the pairs of capabilities are consumed in the encoded ambient. However, in the example above this order cannot be established; the non-star objects can be consumed by two rules applied in parallel. For this reason we have imposed the following priorities between the developmental rules defined above:
b) > c), e), g) > a), d), f) > k) > h), i), j)
According to these priorities, the membrane system [ in m [ ]in m o̅u̅t̅ n [ ]o̅u̅t̅ n [ out n [ ]out n ]t ]n [ i̅n̅ m [ ]i̅n̅ m ]m evolves only to the membrane system [ [ ]n [ ]t ]m if the objects in m and i̅n̅ m are consumed before the objects out n and o̅u̅t̅ n by a rule from the set given above.
The applied rules are:
r1: dlock [ ]n → dlock [ dlock ]n — type h) — a copy of the object dlock is created inside the membrane n, which does not contain a dlock object;
r2: dlock [ ]t → dlock [ dlock ]t — type h) — a copy of the object dlock is created inside the membrane t, which does not contain a dlock object;
r3: [ dlock ]n → [ dlock one ]n — type j) — an object one is created in the membrane n, which contains a dlock object;
r4: [ in m dlock one ]n [ i̅n̅ m ]m → [ in∗ m in∗ m dlock ]n [ i̅n̅∗ m ]m — type a) — membrane n (containing the objects in m, dlock and one) is allowed to enter membrane m (containing the object i̅n̅ m);
the objects in m, one are replaced by the objects in∗ m, in∗ m, and the object i̅n̅ m is replaced by the object i̅n̅∗ m;
r5: in∗ m [ ]in m → [ δ ]in m — type b) — in the presence of an object in∗ m the membrane labelled by in m is dissolved; the object in∗ m signals the fact that the object in m has been consumed;
r6: i̅n̅∗ m [ ]i̅n̅ m → [ δ ]i̅n̅ m — type b) — in the presence of an object i̅n̅∗ m the membrane labelled by i̅n̅ m is dissolved; the object i̅n̅∗ m signals the fact that the object i̅n̅ m has been consumed;
r7: [ in∗ m [ ]t ]n → [ in∗ m [ in∗ m out∗ n in∗ n ]t ]n — type g) — in the presence of star objects in membrane n (which is not an elementary one), a copy of all the star objects from membrane n and two new objects out∗ n, in∗ n are created in the nested membrane t;
r8: [ in∗ m out∗ n in∗ n [ ]out n ]t → [ in∗ m out∗ n in∗ n [ in∗ m out∗ n in∗ n in∗ t out∗ t ]out n ]t — type g) — in the presence of star objects in membrane t (which is not an elementary one), a copy of all the star objects from membrane t and two new objects out∗ t, in∗ t are created in the nested membrane out n;
r9: [ [ out∗ t ]out n ]t → [ ]t [ ]out n — type e) — the membrane out n, being elementary and containing the object out∗ t, is extracted from membrane t;
r10: [ [ out∗ n ]o̅u̅t̅ n ]n → [ ]n [ ]o̅u̅t̅ n — type e) — the membrane o̅u̅t̅ n, being elementary and containing an object out∗ n, is extracted from membrane n;
r11: [ [ out∗ n ]t ]n → [ ]n [ ]t — type e) — the membrane t, being elementary and containing an object out∗ n, is extracted from membrane n;
r12: [ in∗ m ]n [ ]m → [ [ ]n ]m — type c) — the membrane n, being elementary and containing an object in∗ m, is introduced in membrane m, which does not contain any star objects;
r13: [ in∗ m ]t [ ]m → [ [ ]t ]m — type c) — the membrane t, being elementary and containing an object in∗ m, is introduced in membrane m, which does not contain any star objects;
r14: [ in∗ m ]out n [ ]m → [ [ ]out n ]m — type c) — the membrane out n, being elementary and containing an object in∗ m, is introduced in membrane m, which does not contain any star objects;
r15: [ in∗ n ]t [ ]n → [ [ ]t ]n — type c) — the membrane t, being elementary and containing an object in∗ n, is introduced in membrane n, which does not contain any star objects;
r16: [ in∗ n ]out n [ ]n → [ [ ]out n ]n — type c) — the membrane out n, being elementary and containing an object in∗ n, is introduced in membrane n, which does not contain any star objects;
r17: [ in∗ t ]out n [ ]t → [ [ ]out n ]t — type c) — the membrane out n, being elementary and containing an object in∗ t, is introduced in membrane t, which does not contain any star objects;
r18: [ dlock ]t → [ dlock one ]t — type j) — an object one is created in membrane t, which contains a dlock object;
r19: [ o̅u̅t̅ n [ out n dlock one ]t ]n → [ o̅u̅t̅∗ n [ out∗ n out∗ n dlock ]t ]n — type d) — membrane t (containing the objects dlock, one, out n) can be extracted from membrane n (containing the object o̅u̅t̅ n); the objects one, out n are replaced by the objects out∗ n, out∗ n, and the object o̅u̅t̅ n is replaced by the object o̅u̅t̅∗ n;
r20: out∗ n [ ]out n → [ δ ]out n — type b) — in the presence of an object out∗ n the membrane labelled by out n is dissolved; the object out∗ n signals the fact that the object out n has been consumed;
r21: o̅u̅t̅∗ n [ ]o̅u̅t̅ n → [ δ ]o̅u̅t̅ n — type b) — in the presence of an object o̅u̅t̅∗ n the membrane labelled by o̅u̅t̅ n is dissolved; the object o̅u̅t̅∗ n signals the fact that the object o̅u̅t̅ n has been consumed;
r22: [ [ out∗ n ]t ]n → [ ]t [ ]n — type e) — the membrane t, being elementary and containing an object out∗ n, is extracted from membrane n;
r23: [ dlock ]n → [ ]n — type i) — the object dlock from membrane n is consumed;
r24: [ dlock ]t → [ ]t — type i) — the object dlock from membrane t is consumed.
The rules are applied in the order presented above, with the additional remark that the rules of the tuples (r5, r6), (r10, r11), (r12, r13, r14), (r15, r16), (r20, r21), (r22, r24) can be applied in any order or in parallel. The computation stops when, after introducing all the possible objects dlock by applying rules of form h), none of the sequences of rules j), a) or j), d) or j), f) can be applied.
Remark 3.46 ([AC1]). At a certain moment, the membrane out n of the example above contains the objects in∗ m, out∗ n, in∗ n, in∗ t, out∗ t. In order to avoid an unexpected return of membrane out n into membrane t, we restrict the use of the object in∗ t by imposing the lack of star objects in the target membrane. This is enough to distinguish membrane t (having star objects) from membrane m, which has no such star objects.
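The priority scheme b) > c), e), g) > a), d), f) > k) > h), i), j) imposed earlier can be read as: at each step, only the applicable rules belonging to the highest non-empty priority class may fire. A minimal Python sketch of this selection policy, with rule applicability abstracted into a set of rule names (the representation is ours):

```python
# Priority classes from highest to lowest, as imposed in Remark 3.45.
PRIORITY = [["b"], ["c", "e", "g"], ["a", "d", "f"], ["k"], ["h", "i", "j"]]

def enabled_rules(applicable):
    """Keep only the applicable rules of the highest non-empty priority class."""
    for cls in PRIORITY:
        hits = sorted(r for r in applicable if r in cls)
        if hits:
            return hits
    return []

# A dissolution rule b) pre-empts everything else that is applicable.
assert enabled_rules({"a", "b", "h"}) == ["b"]
# Movement rules pre-empt pair-consumption rules a), d), f).
assert enabled_rules({"e", "k"}) == ["e"]
# Lowest class fires only when nothing above it is applicable.
assert enabled_rules({"h", "j"}) == ["h", "j"]
```

This is exactly the mechanism that forces the pair in m / i̅n̅ m to be fully processed (membranes dissolved, movement completed) before the next pair is attacked.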
It is worth noting that the membrane system [ in m [ ]in m o̅u̅t̅ n [ ]o̅u̅t̅ n [ out n [ ]out n ]t ]n [ i̅n̅ m [ ]i̅n̅ m ]m evolves to the membrane system [ ]t [ [ ]n ]m if the objects out n and o̅u̅t̅ n are consumed before the objects in m and i̅n̅ m. We denote the membrane systems by M, M′, Mi, N, N′, and the labels of the membranes by n, m, . . ..
Definition 3.47 ([AC6]). The set M of membrane configurations M is defined by
M ::= λ | O | [ M ]n | (νn)M | M1 M2
where by O we denote a finite multiset of objects.
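As an illustration (ours, not from the book), the grammar of Definition 3.47 can be transcribed directly as an algebraic datatype; the constructor names (`Empty`, `Objects`, `Mem`, `Nu`, `Par`) are invented for this sketch:

```python
from dataclasses import dataclass
from typing import Union

# One class per production of  M ::= lambda | O | [ M ]_n | (nu n)M | M1 M2
@dataclass
class Empty:                # lambda: the empty configuration
    pass

@dataclass
class Objects:              # O: a finite multiset of objects
    objs: tuple             # e.g. ("dlock", "in m")

@dataclass
class Mem:                  # [ M ]_n: a membrane labelled n with body M
    name: str
    body: "Config"

@dataclass
class Nu:                   # (nu n)M: the name n restricted to M
    name: str
    body: "Config"

@dataclass
class Par:                  # M1 M2: juxtaposition of two configurations
    left: "Config"
    right: "Config"

Config = Union[Empty, Objects, Mem, Nu, Par]
```

For instance, `Par(Objects(("dlock",)), Mem("n", Empty()))` represents the configuration dlock [ ]n.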
We can write O or M1 M2 omitting the surrounding membrane, because all the membrane structures are placed inside a skin membrane. Similarly to the restriction operator of process calculi, we consider an operator (νn)M for the restriction of a name n to a membrane configuration M . In order to give a formal encoding of the pure safe ambients into simple mobile membranes, we define the following function:
Definition 3.48 ([AC6]). The translation T : A → M is given by T (A) = dlock T1 (A), where T1 : A → M is defined by:
T1 (A) = cap n [ ]cap n	if A = cap n
T1 (A) = cap n [ T1 (A1 ) ]cap n	if A = cap n. A1
T1 (A) = [ T1 (A1 ) ]n	if A = n[ A1 ]
T1 (A) = [ ]n	if A = n[ ]
T1 (A) = (νn)T1 (A1 )	if A = (νn)A1
T1 (A) = T1 (A1 ) T1 (A2 )	if A = A1 | A2
T1 (A) = λ	if A = 0
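The translation can be sketched as a recursive function. This is our own illustration, not the book's formalism: ambients are tagged tuples (`("cap", c, n, cont)` for a capability prefix with an optional continuation, `("amb", n, cont)` for an ambient, `("nu", n, cont)` for a restriction, `("par", a, b)` for parallel composition, and `"zero"` for 0), and a membrane configuration is a Python list of objects (strings) and tuples:

```python
def t1(a):
    """Translate a safe-ambient term into a membrane configuration.

    A membrane [ M ]_n is encoded as ("mem", n, M); a restriction as
    ("nu", n, M); a multiset as a Python list; an object as a string.
    """
    if a == "zero":
        return []                                  # lambda: empty multiset
    tag = a[0]
    if tag == "cap":                               # cap n  /  cap n. A1
        _, cap, n, cont = a
        obj = f"{cap} {n}"                         # the capability object
        inner = [] if cont is None else t1(cont)
        return [obj, ("mem", obj, inner)]          # cap n [ T1(A1) ]_{cap n}
    if tag == "amb":                               # n[ A1 ]  /  n[ ]
        _, n, cont = a
        return [("mem", n, [] if cont is None else t1(cont))]
    if tag == "nu":                                # (nu n) A1
        _, n, cont = a
        return [("nu", n, t1(cont))]
    if tag == "par":                               # A1 | A2
        return t1(a[1]) + t1(a[2])
    raise ValueError(f"unknown ambient term: {a!r}")

def t(a):
    """T(A) = dlock T1(A): a single dlock object at top level."""
    return ["dlock"] + t1(a)
```

For example, `t(("par", ("amb", "n", ("cap", "in", "m", None)), ("amb", "m", ("cap", "in", "m", None))))` encodes the ambient n[ in m ] | m[ in m ] used in the examples below.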
A membrane structure can be represented in a natural way as a Venn diagram. This makes clear that the order of membrane structures and objects placed in the same region of a large membrane structure is irrelevant; what matters is the relationship between membranes and objects. A rule of the form a [ b ]n → b [ a ]n has the same meaning as any of the rules [ b ]n a → [ a ]n b, a [ b ]n → [ a ]n b, and [ b ]n a → b [ a ]n . Inspired by [17], we formally define a notion of structural congruence which captures this aspect and reduces the number of rules written for a membrane system.
Definition 3.49 ([AC6]). The structural congruence ≡mem over M is the smallest congruence relation satisfying:
M N ≡mem N M ;
M (N M ′ ) ≡mem (M N ) M ′ ;
M λ ≡mem M ;
(νn)(νm)M ≡mem (νm)(νn)M ;
(νm)M ≡mem (νn)M {n/m}, where n is not a membrane name in M ;
(νn)(N M ) ≡mem M (νn)N , where n is not a membrane name in M ;
n ≠ m implies (νn)[ M ]m ≡mem [ (νn)M ]m .
The restriction operator can float outward to extend the scope of a membrane name, and can float inward to restrict it. We deal with multisets of objects and multisets of membranes. For example, we have [ ]n [ ]m ≡mem [ ]m [ ]n , in m [ ]n ≡mem [ ]n in m and in∗ n out∗ m ≡mem out∗ m in∗ n.
Proposition 3.50. The structural congruence has the following properties:
M ≡mem M ;
M ≡mem N implies N ≡mem M ;
M ≡mem N and N ≡mem M ′ implies M ≡mem M ′ ;
M ≡mem N implies M M ′ ≡mem N M ′ ;
M ≡mem N implies M ′ M ≡mem M ′ N ;
M ≡mem N implies [ M ]n ≡mem [ N ]n ;
M ≡mem N implies (νn)M ≡mem (νn)N .
Proposition 3.51 ([AC6]). Structurally congruent ambients are translated into structurally congruent membrane systems; moreover, structurally congruent translated membrane systems correspond to structurally congruent ambients: A ≡amb B iff T (A) ≡mem T (B).
Proof. We first prove that if A ≡amb B then T (A) ≡mem T (B).
If A = A1 | A2 , where A1 and A2 are two ambients, then from the definition of ≡amb we have that B = A2 | A1 . Using the definition of T and A = A1 | A2 , we have that T (A) = dlock T1 (A1 ) T1 (A2 ). From B = A2 | A1 and the definition of T we get T (B) = dlock T1 (A2 ) T1 (A1 ). From the definition of ≡mem we get T (A) ≡mem T (B).
If A = A1 | (A2 | A3 ), where A1 , A2 and A3 are three ambients, then from the definition of ≡amb we have that B = (A1 | A2 ) | A3 . Using the definition of T and A = A1 | (A2 | A3 ), we have that T (A) = dlock T1 (A1 ) (T1 (A2 ) T1 (A3 )). From B = (A1 | A2 ) | A3 and the definition of T , we have that T (B) = dlock (T1 (A1 ) T1 (A2 )) T1 (A3 ). Using the definition of ≡mem we get T (A) ≡mem T (B).
If A = A1 | 0, where A1 is an ambient, then from the definition of ≡amb we have that B = A1 . Using the definition of T and A = A1 | 0, we have that T (A) = dlock T1 (A1 ) λ. From B = A1 and the definition of T , we have that T (B) = dlock T1 (A1 ). Using the definition of ≡mem we get T (A) ≡mem T (B).
If A = (νn)(νm)A1 , where A1 is an ambient, then from the definition of ≡amb we have that B = (νm)(νn)A1 . Using the definition of T and A = (νn)(νm)A1 , we have that T (A) = dlock (νn)(νm)T1 (A1 ). From B = (νm)(νn)A1 and the definition of T , we have that T (B) = dlock (νm)(νn)T1 (A1 ). Using the definition of ≡mem we get T (A) ≡mem T (B).
If A = (νn)A1 , where A1 is an ambient and n ∉ fn(A1 ), then from the definition of ≡amb we have that B = (νm)A1 {m/n}.
Using the definition of T and A = (νn)A1 , we have that T (A) = dlock (νn)T1 (A1 ). From B = (νm)A1 {m/n} and the definition of T , we have that T (B) = dlock (νm)T1 (A1 ){m/n}. Using the definition of ≡mem we get T (A) ≡mem T (B).
If A = (νn)(A1 | A2 ), where A1 and A2 are two ambients and n ∉ fn(A1 ), then from the definition of ≡amb we have that B = A1 | (νn)A2 . Using the definition of T and A = (νn)(A1 | A2 ), we have that T (A) = dlock (νn)(T1 (A1 ) T1 (A2 )). From B = A1 | (νn)A2 and the definition of T , we have that T (B) = dlock T1 (A1 ) (νn)T1 (A2 ). Using the definition of ≡mem we get T (A) ≡mem T (B).
If A = (νn)m[ A1 ], where A1 is an ambient and n ≠ m, then from the definition of ≡amb we have that B = m[ (νn)A1 ]. Using the definition of T and A = (νn)m[ A1 ], we have that T (A) = dlock (νn)[ T1 (A1 ) ]m . From B = m[ (νn)A1 ] and the definition of T , we have that T (B) = dlock [ (νn)T1 (A1 ) ]m . Using the definition of ≡mem we get T (A) ≡mem T (B).
We now prove that if T (A) ≡mem T (B) then A ≡amb B.
If A = A1 | A2 , where A1 and A2 are two ambients, then using the definition of T we have that T (A) = dlock T1 (A1 ) T1 (A2 ). From the definition of ≡mem we get T (B) = dlock T1 (A2 ) T1 (A1 ), where B = A2 | A1 . From the definition of ≡amb we get A ≡amb B.
If A = A1 | (A2 | A3 ), where A1 , A2 and A3 are three ambients, then using the definition of T we have that T (A) = dlock T1 (A1 ) (T1 (A2 ) T1 (A3 )). From the definition of ≡mem we get T (B) = dlock (T1 (A1 ) T1 (A2 )) T1 (A3 ), where B = (A1 | A2 ) | A3 . From the definition of ≡amb we get A ≡amb B.
If A = A1 | 0, where A1 is an ambient, then using the definition of T we have that T (A) = dlock T1 (A1 ) λ. From the definition of ≡mem we get T (B) = dlock T1 (A1 ), where B = A1 . From the definition of ≡amb we get A ≡amb B.
If A = (νn)(νm)A1 , where A1 is an ambient, then using the definition of T we have that T (A) = dlock (νn)(νm)T1 (A1 ). From the definition of ≡mem we get T (B) = dlock (νm)(νn)T1 (A1 ), where B = (νm)(νn)A1 . From the definition of ≡amb we get A ≡amb B.
If A = (νn)A1 , where A1 is an ambient and n ∉ fn(A1 ), then using the definition of T we have that T (A) = dlock (νn)T1 (A1 ). From the definition of ≡mem we get T (B) = dlock (νm)T1 (A1 ){m/n}, where B = (νm)A1 {m/n}. From the definition of ≡amb we get A ≡amb B.
If A = (νn)(A1 | A2 ), where A1 and A2 are two ambients and n ∉ fn(A1 ), then using the definition of T we have that T (A) = dlock (νn)(T1 (A1 ) T1 (A2 )). From the definition of ≡mem we get T (B) = dlock T1 (A1 ) (νn)T1 (A2 ), where B = A1 | (νn)A2 . From the definition of ≡amb we get A ≡amb B.
If A = (νn)m[ A1 ], where A1 is an ambient and n ≠ m, then using the definition of T we have that T (A) = dlock (νn)[ T1 (A1 ) ]m . From the definition of ≡mem we get T (B) = dlock [ (νn)T1 (A1 ) ]m , where B = m[ (νn)A1 ]. From the definition of ≡amb we get A ≡amb B.
In [22] the authors put together the concept of “behaviour” of a biological system and the concept of “observer” to obtain the system represented in Figure 5. “Biological System” represents a mathematical model of a biological system; such a system evolves by passing from one configuration to another, producing in this way a “behaviour”. An “Observer” is placed outside the biological system and watches its behaviour. Similarly to the protein observation defined in [34], we introduce a relation called barbed bisimulation which equates systems if they are indistinguishable under certain observations. In membrane systems the observer has
Figure 5. Observer
the possibility of watching only the top-level membranes at any step of the computation, where the set of top-level membranes T L is defined as follows:
if M = O, then T L(M ) = ∅;
if M = [ N ]n , then T L(M ) = {n};
if M = (νn)N , then T L(M ) = T L(N )\{n};
if M = M1 M2 , then T L(M ) = T L(M1 ) ∪ T L(M2 ).
For the case M = (νn)N we have that T L(M ) = T L(N )\{n} because an observer does not have the power to observe the membranes with restricted names. From now on, we work with a subclass of M, namely the systems obtained from the translation of safe ambients. Representing by r one of the rules a), . . . , k) from our particular set of developmental rules, we use M →r N to denote the transformation of a membrane system M into a membrane system N by applying a rule r. Similarly to Chapter 1, where a structural operational semantics for a particular class of P systems was defined, we can define the corresponding relation ⇒mem . Considering two membrane systems M and N with only one object dlock, we say that M ⇒mem N if there is a sequence of rules r1 , . . . , ri such that M →r1 . . . →ri N . The operational semantics of the membrane systems is defined in terms of the transformation relation →r by the following rules:
(DRule) M →r N for each developmental rule r of the forms a), . . . , k);
(Comp) if M →r M ′ , then M N →r M ′ N ;
(Res) if M →r M ′ , then (νn)M →r (νn)M ′ ;
(Amb) if M →r M ′ , then [ M ]n →r [ M ′ ]n ;
(Struc) if M ≡mem M ′ , M ′ →r N ′ and N ′ ≡mem N , then M →r N .
The key ingredient of the barbed bisimulation is the notion of barb. A barb is a predicate which describes the observed elements of a certain structure.
Definition 3.52 ([AC6]). A barb ↓mem is defined inductively by the following rules:
M ↓mem n	if n ∈ T L(M );
M1 · · · Mk ↓mem n1 · · · nk	if ni ∈ T L(Mj ), 1 ≤ i, j ≤ k;
(νk)M ↓mem n	if k ≠ n.
We write M ⇓mem n if either M ↓mem n or M ⇒+mem M ′ and M ′ ↓mem n. Formally, we have:
M ↓mem n =def M ≡mem (νm1 ) . . . (νmi )[ M1 ]n M2 , with n ∉ {m1 , . . . , mi };
M ⇓mem n =def either M ↓mem n or M ⇒+mem M ′ and M ′ ↓mem n.
The following result reflects a relationship between structural congruence and barb predicates.
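The observation function TL and the barb predicate can be sketched directly. This is our own illustration, under the assumption that a configuration is a Python list of objects (strings), membranes `("mem", n, body)` and restrictions `("nu", n, body)`:

```python
def top_level(m):
    """TL(M): labels of the top-level membranes of a configuration m."""
    labels = set()
    for item in m:
        if isinstance(item, str):            # TL(O) = empty set
            continue
        tag, n, body = item
        if tag == "mem":                     # TL([ N ]_n) = {n}
            labels.add(n)
        elif tag == "nu":                    # TL((nu n)N) = TL(N) \ {n}:
            labels |= top_level(body) - {n}  # restricted names are invisible
    return labels                            # juxtaposition: union over items

def barb(m, n):
    """M down_mem n: the observer sees a top-level membrane named n."""
    return n in top_level(m)
```

For instance, `top_level([("nu", "n", [("mem", "n", []), ("mem", "m", [])])])` is `{"m"}`: the restricted name n is hidden from the observer, exactly as in the (νn)N clause of the definition.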
Proposition 3.53 ([AC6]). Structurally congruent membrane systems have the same top level membranes. If M ≡mem N , then M ↓mem n iff N ↓mem n, for all n ∈ T L(M, N ).
Proof. M ↓mem n means that M ≡mem (νm1 ) . . . (νmi )[ M1 ]n M2 , where n is a label different from m1 , . . . , mi . From M ≡mem (νm1 ) . . . (νmi )[ M1 ]n M2 and M ≡mem N , we get N ≡mem (νm1 ) . . . (νmi )[ M1 ]n M2 , which means that N ↓mem n.
The set of membrane labels M L is defined as follows:
if M = O, then M L(M ) = ∅;
if M = [ N ]n , then M L(M ) = M L(N ) ∪ {n};
if M = (νn)N , then M L(M ) = M L(N );
if M = M1 M2 , then M L(M ) = M L(M1 ) ∪ M L(M2 ).
If a system contains a top-level membrane after applying a number of computation steps, then a structurally congruent membrane system contains the same top-level membrane after applying the same number of computation steps.
Proposition 3.54 ([AC6]). If M ≡mem N , then M ⇓mem n iff N ⇓mem n for all n ∈ M L(M, N ).
Proof. We prove only the first implication, the other being treated similarly by switching M and N .
If M ⇓mem n, then either M ↓mem n or M ⇒+mem M ′ and M ′ ↓mem n. The first case was studied in the previous proposition, so only the second case is presented. If M ⇒+mem M ′ and M ≡mem N , then there exists N ′ such that N ⇒+mem N ′ and M ′ ≡mem N ′ . From M ′ ≡mem N ′ and M ′ ↓mem n we have that N ′ ↓mem n, which together with N ⇒+mem N ′ implies that N ⇓mem n.
Proposition 3.55 ([AC6]). An ambient contains a top ambient labelled by n if and only if the translated membrane system contains a top-level membrane labelled by n. Formally, A ↓amb n iff T (A) ↓mem n for all n ∈ T L(T (A)).
Proof. We prove only the first implication, the other implication being treated similarly. If A ↓amb n, then we have A1 and A2 such that A = (νm1 ) . . . (νmi )n[ A1 ] | A2 , where n ∉ {m1 , . . . , mi }. From A =
(νm1 ) . . . (νmi )n[ A1 ] | A2 and the definition of the translation function we have that T (A) = (νm1 ) . . . (νmi ) dlock [ T1 (A1 ) ]n T1 (A2 ), which means that T (A) ↓mem n.
Proposition 3.56 ([AC6]). An ambient eventually contains a top ambient n if and only if the translated membrane system, after applying the same number of steps, eventually contains a top-level membrane n. Formally, A ⇓amb n iff T (A) ⇓mem n for all n ∈ M L(T (A)).
Proof. We prove only the first implication, the other implication being treated similarly.
If A ⇓amb n, then either A ↓amb n or A ⇒+amb B and B ↓amb n. The first case was studied in the previous proposition, so only the second case is presented. If A ⇒+amb B, then T (A) ⇒+mem T (B). From B ↓amb n, according to the previous proposition we have that T (B) ↓mem n. From T (B) ↓mem n and T (A) ⇒+mem T (B), we get that T (A) ⇓mem n.
We consider that two membrane configurations are similar if they behave in the same way when they are placed in the same (arbitrary) context. We define a contextual bisimulation as in [69]. Considering a pair (M ; N ) of membrane systems, we construct all the possible context pairs (Cmem (M ); Cmem (N )) using the following recursive definition:
(Cmem (M ); Cmem (N )) = (M ; N ) | ([ Cmem (M ) ]n ; [ Cmem (N ) ]n ) | ((νn)Cmem (M ); (νn)Cmem (N )) | (Cmem (M ) M ′ ; Cmem (N ) M ′ ) | (M ′ Cmem (M ); M ′ Cmem (N )).
We define a contextual equivalence ≃mem over membrane systems by
M ≃mem N =def for all n and for all the pairs (Cmem (M ); Cmem (N )), Cmem (M ) ⇓mem n iff Cmem (N ) ⇓mem n.
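The recursive construction of contexts can be mimicked by treating a context as a function that plugs a configuration into a larger one. This is a minimal sketch of ours (the helper names `hole`, `in_membrane`, `in_parallel`, `under_nu` are invented), assuming configurations are lists with membranes `("mem", n, body)` and restrictions `("nu", n, body)`; checking ⇓mem under all contexts is of course not computable, so this only builds finite contexts:

```python
def hole(m):
    """The empty context: C[M] = M."""
    return m

def in_membrane(n, ctx):
    """From context C, build C'[M] = [ C[M] ]_n."""
    return lambda m: [("mem", n, ctx(m))]

def in_parallel(extra, ctx):
    """From context C, build C'[M] = C[M] M' (parallel with a fixed M')."""
    return lambda m: ctx(m) + extra

def under_nu(n, ctx):
    """From context C, build C'[M] = (nu n) C[M]."""
    return lambda m: [("nu", n, ctx(m))]
```

For example, `in_membrane("k", in_parallel(["a"], hole))` is the context [ [·] a ]k; applied to the configuration `["dlock"]` it yields `[("mem", "k", ["dlock", "a"])]`.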
Proposition 3.57 ([AC6]). If M ≡mem N then M ≃mem N .
Proof. Consider an arbitrary pair (Cmem (M ); Cmem (N )) and a name n such that Cmem (M ) ⇓mem n. We show that Cmem (N ) ⇓mem n. Cmem (M ) ≡mem Cmem (N ) is proved by induction on the size of Cmem (M ) and Cmem (N ). According to Proposition 3.54 we have that Cmem (M ) ⇓mem n implies Cmem (N ) ⇓mem n; the other implication is proved by switching M and N .
Proposition 3.58 ([AC6]). If T1 (A) ≃mem T1 (B) then A ≃amb B.
Proof. T1 (A) ≃mem T1 (B) means that for all n and for all pairs (Cmem (T1 (A)); Cmem (T1 (B))) we have that Cmem (T1 (A)) ⇓mem n iff Cmem (T1 (B)) ⇓mem n. For all the contexts, the pair (Camb (A); Camb (B)) is translated into a pair (T (Camb (A)); T (Camb (B))). By applying the translation function in the second pair, we obtain the pair (Cmem (T1 (A)); Cmem (T1 (B))), where Cmem corresponds to Camb by translation. We have Cmem (T1 (A)) ⇓mem n ⇔ T (Camb (A)) ⇓mem n ⇔ Camb (A) ⇓amb n, the last step by Proposition 3.56. Similarly, Cmem (T1 (B)) ⇓mem n ⇔ T (Camb (B)) ⇓mem n ⇔ Camb (B) ⇓amb n, again by Proposition 3.56. It follows that Camb (A) ⇓amb n ⇔ Camb (B) ⇓amb n, which implies A ≃amb B.
Remark 3.59 ([AC6]). A ≃amb B does not necessarily imply that T1 (A) ≃mem T1 (B), because the translated contexts from mobile ambients do not represent all the contexts from membrane systems (the set of contexts in membrane systems is larger than the set of translated contexts). A ≃amb B implies that for all n ∈ M L(T (A | B)) and for all Camb we have Camb (A) ⇓amb n ⇔ Camb (B) ⇓amb n. According to Proposition 3.56, we have T (Camb (A)) ⇓mem n ⇔ T (Camb (B)) ⇓mem n. We should check that Cmem (T1 (A)) ⇓mem n ⇔ Cmem (T1 (B)) ⇓mem n for all the contexts (not only for a particular set) in order to have T1 (A) ≃mem T1 (B), and in general this cannot be done.
The deadlock for a membrane system is defined in [AC6] as a predicate:
deadlockmem (M ) = true if M ∈ Dmem , and deadlockmem (M ) = false if M ∉ Dmem ,
where Dmem is the set of membrane systems in which, after introducing all the possible objects dlock by applying rules of form h), none of the sequences of rules j), a) or j), d) or j), f) can be applied. The next result relates the two notions of deadlock (in ambients and in membrane systems) through the defined translation function.
Proposition 3.60 ([AC6]). A ∈ Damb iff T (A) ∈ Dmem , and deadlockamb (A) = deadlockmem (T (A)).
Proof. T (A) ∈ Dmem means that after applying all the possible rules of type h), none of the rules of type a), d) or f) can be applied. This is equivalent to the fact that no object corresponding to a translated capability is consumed, and from the definition of the object dlock this means that the translated ambient is a deadlock, so A ∈ Damb .
We prove the other implication by checking all the rules from the definition of Damb , and we proceed using structural induction.
If A = 0, then T (A) = dlock. No rule of type h), a), d) or f) can be applied, which means that T (A) ∈ Dmem .
If A = cap n. A1 , then T (A) = dlock cap n [ T1 (A1 ) ]cap n . No rule of type h), a), d) or f) can be applied, which means that T (A) ∈ Dmem .
If A = n[ A1 ], A1 ∈ Damb , and T A(A1 ) = ∅, then we have that A1 = A′1 | . . . | A′k , where each A′i = cap ni . A′′i for all i ∈ {1, . . . , k}. This means that T (A) = dlock [ cap n1 [ T1 (A′′1 ) ]cap n1 . . . cap nk [ T1 (A′′k ) ]cap nk ]n . Only one rule of type h) can be applied; in the resulting configuration we cannot apply a rule of type a), d) or f), which means that T (A) ∈ Dmem .
If A = n[ A1 ], A1 ∈ Damb , T A(A1 ) ≠ ∅, open m | A1 ⇒amb A′′m and out n ∉ T C(A′′m ) for all m ∈ T A(A1 ), then we have that A1 = A′1 | . . . | A′k , where each A′i = cap ni . A′′i or
A′i = ni [ A′′i ] for all i ∈ {1, . . . , k}, and it results that T (A) = dlock [ cap ni1 [ T1 (A′′i1 ) ]cap ni1 . . . cap nis [ T1 (A′′is ) ]cap nis [ T1 (A′′j1 ) ]nj1 . . . [ T1 (A′′jt ) ]njt ]n . After applying all the possible rules of type h), and using the fact that the membrane systems T1 (A′′j1 ) . . . T1 (A′′jt ) do not contain the object out n and T1 (A1 ) ∈ Dmem , in the resulting configuration we cannot apply a rule of type a), d) or f), which means that T (A) ∈ Dmem .
If A = A1 | A2 , A1 , A2 ∈ Damb , open m ∉ T C(Ai ), open k | Ai ⇒amb A′′i , in m ∉ T C(A′′i ) for all k ∈ T A(Ai ), for all m ∈ T A(Aj ), i ≠ j, i, j = 1, 2, then we have that A = A′1 | . . . | A′k , where each A′i = cap ni . A′′i or A′i = ni [ A′′i ] for all i ∈ {1, . . . , k}, and it results that T (A) = dlock cap ni1 [ T1 (A′′i1 ) ]cap ni1 . . . cap nis [ T1 (A′′is ) ]cap nis [ T1 (A′′j1 ) ]nj1 . . . [ T1 (A′′jt ) ]njt . After applying all the possible rules of type h), using the fact that the membrane system T1 (A′′jd ), d = 1, . . . , t, does not contain the objects in njc , c = 1, . . . , t, c ≠ d, that if cap nic = open nic then nic ≠ njd for all d ∈ {1, . . . , t}, and that T1 (Ai ) ∈ Dmem , i = 1, 2, in the resulting configuration we cannot apply a rule of type a), d) or f), which means that T (A) ∈ Dmem .
Proposition 3.61 ([AC6]). If A and B are two ambients and M is a membrane system such that A ⇒amb B and M = T (A), then there exists a chain of transitions M →r1 . . . →rk N such that r1 , . . . , rk are developmental rules, and N = T (B).
Proof. Since A ⇒amb B, one of the requirements In, Out or Open is fulfilled for the ambients A′ and B ′ which are included in A and B respectively. We treat all the possible cases:
1. A′ = n[ in m ] | m[ in m ] and B ′ = m[ n[ ] ], where n is an ambient which contains only the capability in m. Then according to the definition of the translation function T , M contains the membrane structure [ in m [ ]in m ]n [ in m [ ]in m ]m , and applying some rules of form h) we obtain the following structure: [ in m dlock [ ]in m ]n [ in m [ ]in m ]m . Using the rules
r1 : [ dlock ]n → [ dlock one ]n
r2 : [ in m dlock one ]n [ in m ]m → [ in∗ m in∗ m dlock ]n [ in∗ m ]m
r3 : in∗ m [ ]in m → [ δ ]in m
r4 : in∗ m [ ]in m → [ δ ]in m
r5 : [ in∗ m ]n [ ]m → [ [ ]n ]m ,
and some rules of the form i), there exists the sequence of transitions M →∗k) M1 →r1 . . . →r5 M6 →∗i) N , where M1 , . . . , M6 are intermediary configurations, and the membrane structure N contains the membrane structure [ [ ]n ]m . Once the objects dlock and one are created near the object in m, these transitions are the only deterministic steps which can be performed. We can notice that T1 (B ′ ) = [ [ ]n ]m . Hence, according to the definition of the translation function T and the transition relation →r , we reach the conclusion
that the membrane structure M admits the required sequence of transitions leading to the membrane structure N , and T (B) = N .
2. A′ = m[ out m n[ out m ] ] and B ′ = m[ ] n[ ], where n is an ambient which contains only the capability out m. Then according to the definition of the translation function T , M contains the membrane structure [ out m [ ]out m [ out m [ ]out m ]n ]m , and applying some rules of form h) we obtain the following structure: [ out m [ ]out m [ dlock out m [ ]out m ]n ]m . Using the rules
r1 : [ dlock ]n → [ dlock one ]n
r2 : [ out m [ out m dlock one ]n ]m → [ out∗ m [ out∗ m out∗ m dlock ]n ]m
r3 : out∗ m [ ]out m → [ δ ]out m
r4 : out∗ m [ ]out m → [ δ ]out m
r5 : [ [ out∗ m ]n ]m → [ ]m [ ]n ,
and some rules of the form i), there exists the sequence of transitions M →∗k) M1 →r1 . . . →r5 M6 →∗i) N , where M1 , . . . , M6 are intermediary configurations, and the membrane structure N contains the membrane structure [ ]m [ ]n . Once the objects dlock and one are created near the object out m, these transitions are the only steps which can be performed. We can notice that T1 (B ′ ) = [ ]m [ ]n . Hence, according to the definition of the translation function T and the transition relation →r , we reach the conclusion that the membrane structure M admits the required sequence of transitions leading to the membrane structure N , and T (B) = N .
3. A′ = m[ n[ A1 open n ] open n ] and B ′ = m[ A1 ], where n is an ambient containing the ambient structure A1 . Then according to the definition of the translation function T , M contains the membrane structure [ [ T1 (A1 ) open n [ ]open n ]n open n [ ]open n ]m , and applying some rules of form h) we obtain the following structure: [ [ T1 (A1 ) open n [ ]open n ]n dlock open n [ ]open n ]m . Using the rules
r1 : [ dlock ]m → [ dlock one ]m
r2 : [ open n ]n open n dlock one → [ δ ]n open∗ n open∗ n dlock
r3 : open∗ n [ ]open n → [ δ ]open n
r4 : open∗ n [ ]open n → [ δ ]open n ,
and some rules of the form i), there exists the sequence of transitions M →∗k) M1 →r1 . . . →r4 M5 →∗i) N , where M1 , . . . , M5 are intermediary configurations, and the membrane structure N contains the membrane structure [ T1 (A1 ) ]m . Once the objects dlock and one are created near the object open n, these transitions are the only steps which can be performed. We can notice that T1 (B ′ ) = [ T1 (A1 ) ]m . Hence, according to the definition of the translation function T and the transition relation →r , we reach the conclusion that the membrane structure M admits the required sequence of transitions leading to the membrane structure N , and T (B) = N .
4. A′ = n[ in m . . . ] | m[ in m ] and B ′ = m[ n[ ] ], where n is an ambient which contains the capability in m and other capabilities or ambients. We treat only the case A′ = n[ in m t[ ] ] | m[ in m ] and B ′ = m[ n[ t[ ] ] ], where t is an empty membrane, and give some ideas on how to treat the cases with a more nested structure for n. Then according to the definition of the translation function T , M contains the membrane structure [ in m [ ]in m [ ]t ]n [ in m [ ]in m ]m , and applying some rules of form h) we obtain the following structure: [ in m dlock [ ]in m [ ]t ]n [ in m [ ]in m ]m . Using the rules
r1 : [ dlock ]n → [ dlock one ]n
r2 : [ in m dlock one ]n [ in m ]m → [ in∗ m in∗ m dlock ]n [ in∗ m ]m
r3 : in∗ m [ ]in m → [ δ ]in m
r4 : in∗ m [ ]in m → [ δ ]in m
r5 : [ in∗ m [ ]t ]n → [ in∗ m [ in∗ m out∗ n in∗ n ]t ]n
r6 : [ [ out∗ n ]t ]n → [ ]n [ ]t
r7 : [ in∗ m ]n [ ]m → [ [ ]n ]m
r8 : [ in∗ m ]t [ ]m → [ [ ]t ]m
r9 : [ in∗ n ]t [ ]n → [ [ ]t ]n ,
and some rules of the form i), there exists the sequence of transitions M →∗k) M1 →r1 . . . →r9 M10 →∗i) N , where M1 , . . . , M10 are intermediary configurations, and the membrane structure N contains the membrane structure [ [ [ ]t ]n ]m . Once the objects dlock and one are created near the object in m, these transitions are the only steps which can be performed. We can notice that T1 (B ′ ) = [ [ [ ]t ]n ]m . Hence, according to the definition of the translation function T and the transition relation →r , we reach the conclusion that the membrane structure M admits the required sequence of transitions leading to the membrane structure N , and T (B) = N . If the number of membranes nested in the ambient n is greater than one, or there are more capabilities, the number of rules applied increases, but the result is the same: the membrane n is transformed into an elementary membrane, it is introduced in m, where it regains the same nested structure, all this process being controlled by the objects created by the sequence of rules applied. The process stops when all the star objects are consumed and there is only one dlock object in the membrane system.
5. A′ = m[ n[ out m . . . ] out m ] and B ′ = m[ ] n[ ], where n is an ambient which contains the capability out m and other capabilities or ambients. We treat only the case A′ = m[ n[ out m t[ ] ] out m ] and B ′ = n[ t[ ] ] m[ ], where t is an empty membrane, and give some ideas on how to treat the cases with a more nested structure for n. Then according to the definition of the translation function T , M contains the membrane structure [ [ out m [ ]out m [ ]t ]n out m [ ]out m ]m ,
and applying some rules of form h) we obtain the following structure: [ [ dlock out m [ ]out m [ ]t ]n out m [ ]out m ]m . Using the rules
r1 : [ dlock ]n → [ dlock one ]n
r2 : [ [ out m dlock one ]n out m ]m → [ [ out∗ m out∗ m dlock ]n out∗ m ]m
r3 : out∗ m [ ]out m → [ δ ]out m
r4 : out∗ m [ ]out m → [ δ ]out m
r5 : [ out∗ m [ ]t ]n → [ out∗ m [ out∗ m out∗ n in∗ n ]t ]n
r6 : [ [ out∗ n ]t ]n → [ ]n [ ]t
r7 : [ [ out∗ m ]n ]m → [ ]n [ ]m
r8 : [ [ out∗ m ]t ]m → [ ]t [ ]m
r9 : [ in∗ n ]t [ ]n → [ [ ]t ]n ,
and some rules of the form i), there exists the sequence of transitions M →∗k) M1 →r1 . . . →r9 M10 →∗i) N , where M1 , . . . , M10 are intermediary configurations, and the membrane structure N contains the membrane structure [ ]m [ [ ]t ]n . Once the objects dlock and one are created near the object out m, these transitions are the only steps which can be performed. We can notice that T1 (B ′ ) = [ ]m [ [ ]t ]n . Hence, according to the definition of the translation function T and the transition relation →r , we reach the conclusion that the membrane structure M admits the required sequence of transitions leading to the membrane structure N , and T (B) = N . If the number of membranes nested in the ambient n is greater than one, or there are more capabilities, the number of rules applied increases, but the result is the same: the membrane n is transformed into an elementary membrane, it is extracted from m, then it regains the same nested structure, all this process being controlled by the objects created by the sequence of rules applied. The process stops when all the star objects are consumed.
Proposition 3.62 ([AC6]). Let M and N be two membrane systems with only one dlock object, and A an ambient such that M = T (A). If there is a sequence of transitions M →r1 . . . →rk N , then there exists an ambient B with A ⇒∗amb B and N = T (B). The number of pairs of non-star objects consumed in membrane systems is equal to the number of pairs of capabilities consumed in ambients.
Proof. We proceed by structural induction. Since M does not contain any star object, the first rule which consumes a translated capability object has one of the following forms:
• [ in m dlock one ]n [ in m ]m → [ in∗ m in∗ m dlock ]n [ in∗ m ]m ,
• [ [ out m dlock one ]n out m ]m → [ [ out∗ m out∗ m dlock ]n out∗ m ]m ,
• [ open m ]m open m dlock one → [ δ ]m dlock open∗ m open∗ m.
We treat all the possible cases:
1. If the first rule applied is [ in m dlock one ]n [ in m ]m → [ in∗ m in∗ m dlock ]n [ in∗ m ]m
where the membrane n contains only the capability object in m and the corresponding membrane labelled in m, then M contains the membrane structure [ in m [ ]in m ]n [ in m [ ]in m ]m . According to the definition of T , M can be written as M1 M ′ or M2 [ M ′ ], where M ′ = [ in m [ ]in m ]n [ in m [ ]in m ]m and M2 represents a membrane structure in which M ′ is placed inside a nested structure of translated ambients. If A is a mobile ambient encoded by M = M1 M ′ , then according to the definition of T it contains two ambients A′ = n[ in m ] | m[ in m ] and A1 such that A = A1 | A′ , T1 (A′ ) = M ′ , and T1 (A1 ) = M1 . If A is a mobile ambient encoded by M = M2 [ M ′ ], then according to the definition of T it contains two ambients A′ = n[ in m ] | m[ in m ] and A2 such that A = A2 [ A′ ], T1 (A′ ) = M ′ , and T1 (A2 ) = M2 . The application of the rule defined above to the membrane system M changes only the membrane system M ′ . The newly created objects in∗ m, in∗ m and in∗ m control the movement of membrane n into membrane m, and are consumed by the following rules:
• in∗ m [ ]in m → [ δ ]in m
• in∗ m [ ]in m → [ δ ]in m
• [ in∗ m ]n [ ]m → [ [ ]n ]m .
After the application of these rules, M ′ evolves to N ′ = [ [ ]n ]m . The inductive hypothesis expresses that N ′ encodes an ambient B ′ . After obtaining N ′ , N has the structure N = M1 N ′ if M = M1 M ′ , and it encodes the mobile ambient B = A1 | B ′ ; or N has the structure N = M2 [ N ′ ] if M = M2 [ M ′ ], and it encodes the mobile ambient B = A2 [ B ′ ]. The transition from M ′ to N ′ represents also the transition from M to N . It should be noticed that by consuming the capability in m we have A′ ⇒amb B ′ . So the transition from M to N with the consumption of only one non-star object is simulated by the transition of A to B.
2. If the first rule applied is [ in m dlock one ]n [ in m ]m → [ in∗ m in∗ m dlock ]n [ in∗ m ]m , where the membrane n contains the object in m, the corresponding membrane labelled in m and another nested membrane [ ]t , then M contains the membrane structure [ in m [ ]in m [ ]t ]n [ in m [ ]in m ]m . The cases in which the ambient n contains more capabilities and/or more nested ambients are treated using structural induction on the membrane structure. According to the definition of T , M can be written as M1 M ′ or M2 [ M ′ ], where M ′ = [ in m [ ]in m [ ]t ]n [ in m [ ]in m ]m and M2 represents a membrane structure in which M ′ is placed inside a nested structure of translated ambients. If A is a mobile ambient encoded by M = M1 M ′ , then according to the definition of T it contains two ambients A′ = n[ in m t[ ] ] | m[ in m ] and A1 such that A = A1 | A′ , T1 (A′ ) = M ′ , and T1 (A1 ) = M1 . If A is a mobile ambient encoded by M = M2 [ M ′ ], then according to the
definition of T it contains two ambients A′ = n[ in m t[ ] ] | m[ in m ] and A2 such that A = A2 [ A′ ], T1 (A′ ) = M ′ , and T1 (A2 ) = M2 . The application of the rule defined above to the membrane system M changes only the membrane system M ′ . The newly created objects in∗ m, in∗ m and in∗ m control the movement of membrane n into membrane m, and are consumed by the following rules:
in∗ m [ ]in m → [ δ ]in m in∗ m [ ]in m → [ δ ]in m [ in∗ m [ ]t ]n [ ]m → [ in∗ m [ in∗ m in∗ n out∗ n ]t ]n [ ]m [ [ out∗ n ]t ]n → [ ]n [ ]t [ in∗ m ]n [ ]m → [ [ ]n ]m [ in∗ m ]t [ ]m → [ [ ]t ]m [ in∗ n ]t [ ]n → [ [ ]t ]n
After the application of these rules, M ′ evolves to N ′ = [ [ [ ]t ]n ]m . The inductive hypothesis expresses that N ′ encodes an ambient B ′ . After obtaining N ′ , N has the structure N = M1 N ′ if M = M1 M ′ , and it encodes the mobile ambient B = A1 | B ′ ; or N has the structure N = M2 [ N ′ ] if M = M2 [ M ′ ], and it encodes the mobile ambient B = A2 [ B ′ ]. The transition from M ′ to N ′ represents also the transition from M to N . It should be noticed that by consuming the capability in m we have A′ ⇒amb B ′ . So the transition from M to N with the consumption of only one non-star object is simulated by the transition of A to B.
3. If the first rule applied is [ [ out m dlock one ]n out m ]m → [ [ out∗ m out∗ m dlock ]n out∗ m ]m , where the membrane n contains only the object out m and the corresponding membrane labelled out m, then M contains the membrane structure [ [ out m [ ]out m ]n out m [ ]out m ]m . According to the definition of T , M can be written as M1 M ′ or M2 [ M ′ ], where M ′ = [ [ out m [ ]out m ]n out m [ ]out m ]m and M2 represents a membrane structure in which M ′ is placed inside a nested structure of translated ambients. If A is a mobile ambient encoded by M = M1 M ′ , then according to the definition of T it contains two ambients A′ = m[ n[ out m ] out m ] and A1 such that A = A1 | A′ , T1 (A′ ) = M ′ , and T1 (A1 ) = M1 . If A is a mobile ambient encoded by M = M2 [ M ′ ], then according to the definition of T it contains two ambients A′ = m[ n[ out m ] out m ] and A2 such that A = A2 [ A′ ], T1 (A′ ) = M ′ , and T1 (A2 ) = M2 . The application of the rule defined above to the membrane system M changes only the membrane system M ′ . The newly created objects out∗ m, out∗ m and out∗ m are consumed by the following rules:
• out∗ m [ ]out m → [ δ ]out m
• out∗ m [ ]out m → [ δ ]out m
• [ [ out∗ m ]n ]m → [ ]n [ ]m .
6. MOBILE MEMBRANES AND MOBILE AMBIENTS
199
After the application of these rules, M′ evolves to N′ = [ ]n [ ]m. The inductive hypothesis expresses that N′ encodes an ambient B′. After obtaining N′, N has the structure N = M1 N′ if M = M1 M′ and it encodes the mobile ambient B = A1 | B′, or N has the structure N = M2[ N′ ] if M = M2[ M′ ] and it encodes the mobile ambient B = A2[B′]. The transition from M′ to N′ also represents the transition from M to N. It should be noticed that by consuming the capability out m we have A′ ⇒amb B′. So the transition from M to N with the consumption of only one non-star object is simulated by the transition of A to B.

4. If the first rule applied is [ [ out m dlock one ]n out m ]m → [ [ out∗ m out∗ m dlock ]n out∗ m ]m, where the membrane n contains only the object out m, the corresponding membrane labelled out m and another nested membrane [ ]t, then M contains the membrane structure [ [ out m [ ]out m [ ]t ]n out m [ ]out m ]m. The cases in which the ambient n contains more capabilities and/or more nested ambients are treated using structural induction on the membrane structure. According to the definition of T, M can be written as M1 M′ or M2[ M′ ], where M′ = [ [ out m [ ]out m [ ]t ]n out m [ ]out m ]m and M2 represents a membrane structure in which M′ is placed inside a nested structure of translated ambients. If A is a mobile ambient encoded by M = M1 M′, then according to the definition of T it contains two ambients A′ = m[ n[ out m t[ ] ] out m ] and A1 such that A = A1 | A′, T1(A′) = M′, and T1(A1) = M1. If A is a mobile ambient encoded by M = M2[ M′ ], then according to the definition of T it contains two ambients A′ = m[ n[ out m t[ ] ] out m ] and A2 such that A = A2[ A′ ], T1(A′) = M′, and T1(A2) = M2. The application of the rule defined above to the membrane system M changes only the membrane system M′. The newly created objects out∗ m, out∗ m and out∗ m determine the application of the following rules:
• out∗ m [ ]out m → [ δ ]out m
• out∗ m [ ]out m → [ δ ]out m
• [ out∗ m [ ]t ]n → [ out∗ m [ out∗ m out∗ n in∗ n ]t ]n
• [ [ out∗ n ]t ]n → [ ]n [ ]t
• [ [ out∗ m ]n ]m → [ ]n [ ]m
• [ [ out∗ m ]t ]m → [ ]t [ ]m
• [ in∗ n ]t [ ]n → [ [ ]t ]n
After the application of these rules, M′ evolves to N′ = [ [ ]t ]n [ ]m. The inductive hypothesis expresses that N′ encodes an ambient B′. After obtaining N′, N has the structure N = M1 N′ if M = M1 M′ and it encodes the mobile ambient B = A1 | B′, or N has the structure N = M2[ N′ ] if M = M2[ M′ ] and it encodes the mobile ambient B = A2[B′]. The transition from M′ to N′ also represents the transition from M to N.
200 3. MOBILE MEMBRANES AND LINKS TO AMBIENTS AND BRANE CALCULI
It should be noticed that by consuming the capability out m we have A′ ⇒amb B′. So the transition from M to N with the consumption of only one non-star object is simulated by the transition of A to B.

5. If the first rule applied is [ open m ]m open m dlock one → [ δ ]m dlock open∗ m open∗ m, where the membrane m contains the object open m, the corresponding membrane labelled open m and the membrane structure M1, whereas membrane open m contains the membrane structure M2, then M contains the membrane structure [ open m [ ]open m M1 ]m open m [open m M2 ]open m. According to the definition of T, M can be written as M3 M′ or M4[ M′ ], where M′ = [ open m [ ]open m M1 ]m open m [open m M2 ]open m and M4 represents a membrane structure in which M′ is placed inside a nested structure of translated ambients. We denote by path a path of capabilities encoded in the membrane structure M2, of length greater than or equal to zero, and by nest the mobile structure contained in membrane m; we have T1(nest) = M1 and T1(path) = M2. If A is a mobile ambient encoded by M = M3 M′, then according to the definition of T it contains two ambients A′ = m[ nest open m ] | open m.path and A3 such that A = A3 | A′, T1(A′) = M′, and T1(A3) = M3. If A is a mobile ambient encoded by M = M4[ M′ ], then according to the definition of T it contains two ambients A′ = m[ nest open m ] | open m.path and A4 such that A = A4[ A′ ], T1(A′) = M′, and T1(A4) = M4. The application of the rule defined above to the membrane system M changes only the membrane system M′. The newly created objects open∗ m and open∗ m are consumed by the following rules:
• open∗ m [ ]open m → [ δ ]open m
• open∗ m [ ]open m → [ δ ]open m
After the application of these rules, M′ evolves to N′ = M1 M2. The inductive hypothesis expresses that N′ encodes an ambient B′. After obtaining N′, N has the structure N = M3 N′ if M = M3 M′ and it encodes the mobile ambient B = A3 | B′, or N has the structure N = M4[ N′ ] if M = M4[ M′ ] and it encodes the mobile ambient B = A4[B′]. The transition from M′ to N′ also represents the transition from M to N. It should be noticed that by consuming the capability open m we have A′ ⇒amb B′. So the transition from M to N with the consumption of only one non-star object is simulated by the transition of A to B.

6. If the first rule applied is [ in m dlock one ]n [ in m ]m → [ in∗ m in∗ m dlock ]n [ in∗ m ]m, where the membrane n contains the object in m, the corresponding membrane labelled in m and another nested membrane structure [ . . . [ ]ts+1 . . . ]t1, then M contains the membrane structure [ in m [ ]in m [ . . . [ ]ts+1 . . . ]t1 ]n [ in m [ ]in m ]m.
We suppose that the proposition holds for membranes n of depth at most s, and we prove it for depth s + 1. According to the definition of T, M can be written as M1 M′ or M2[ M′ ], where M′ = [ in m [ ]in m [ . . . [ ]ts+1 . . . ]t1 ]n [ in m [ ]in m ]m and M2 represents a membrane structure in which M′ is placed inside a nested structure of translated ambients. If A is a mobile ambient encoded by M = M1 M′, then according to the definition of T it contains two ambients A′ = n[ in m t1[ . . . ts+1[ ] ] ] | m[ ] and A1 such that A = A1 | A′, T1(A′) = M′, and T1(A1) = M1. If A is a mobile ambient encoded by M = M2[ M′ ], then according to the definition of T it contains two ambients A′ = n[ in m t1[ . . . ts+1[ ] ] ] | m[ ] and A2 such that A = A2[ A′ ], T1(A′) = M′, and T1(A2) = M2. The application of the rule defined above to the membrane system M changes only the membrane system M′. The newly created objects in∗ m, in∗ m and in∗ m determine the application of other rules. After applying all the rules such that the object in m is consumed and no star objects exist in the membrane system, the only membrane structure modified by the application of these rules is M′, which evolves to N′ = [ [ [ . . . [ ]ts+1 ]t1 ]n ]m, while M1 or M2 remain the same. We know, from the inductive hypothesis, that an ambient B′′ = m[ n[ t1[ . . . ts[ ] ] ] ] is encoded into the structure N′′ = [ [ [ . . . [ ]ts ]t1 ]n ]m, and that B′′′ = ts+1[ ] is encoded into the structure N′′′ = [ ]ts+1. According to the definition of T there exists an ambient B′ with the structure m[ n[ t1[ . . . ts+1[ ] ] ] ] such that T1(B′) = N′. After obtaining N′, N has the structure N = M1 N′ if M = M1 M′ and it encodes the mobile ambient B = A1 | B′, or N has the structure N = M2[ N′ ] if M = M2[ M′ ] and it encodes the mobile ambient B = A2[ B′ ]. The transition from M′ to N′ also represents the transition from M to N.
It should be noticed that by consuming the capability in m we have A′ ⇒amb B′. So the transition from M to N with the consumption of only one non-star object is simulated by the transition of A to B.

7. If the first rule applied is [ [ out m dlock one ]n out m ]m → [ [ out∗ m out∗ m dlock ]n out∗ m ]m, where the membrane n contains only the object out m, the corresponding membrane labelled out m and another nested membrane [ . . . [ ]ts+1 . . . ]t1, then M contains the membrane structure [ [ out m [ ]out m [ . . . [ ]ts+1 . . . ]t1 ]n out m [ ]out m ]m. We suppose that the proposition holds for membranes n of depth at most s, and we prove it for depth s + 1. According to the definition of T, M can be written as M1 M′ or M2[ M′ ], where M′ = [ [ out m [ ]out m [ . . . [ ]ts+1 . . . ]t1 ]n out m [ ]out m ]m and M2 represents a membrane structure in which M′ is placed inside a nested structure of translated ambients. If A is a mobile ambient encoded by M = M1 M′, then according to the definition of T it contains two ambients A′ = m[ n[ out m t1[ . . . ts+1[ ] ] ] ] and A1 such that A = A1 | A′, T1(A′) = M′, and T1(A1) = M1. If A is a mobile ambient encoded by
M = M2[ M′ ], then according to the definition of T it contains two ambients A′ = m[ n[ out m t1[ . . . ts+1[ ] ] ] ] and A2 such that A = A2[ A′ ], T1(A′) = M′, and T1(A2) = M2. The application of the rule defined above to the membrane system M changes only the membrane system M′. The newly created objects out∗ m, out∗ m and out∗ m determine the application of other rules. After applying all the rules such that the object out m is consumed and no star objects exist in the membrane system, the only membrane structure modified by the application of these rules is M′, which evolves to N′ = [ ]m [ [ . . . [ ]ts+1 ]t1 ]n, while M1 or M2 remain the same. We know, from the inductive hypothesis, that an ambient B′′ = m[ ] n[ t1[ . . . ts[ ] ] ] is encoded into the structure N′′ = [ ]m [ [ . . . [ ]ts ]t1 ]n, and that B′′′ = ts+1[ ] is encoded into the structure N′′′ = [ ]ts+1. According to the definition of T there exists an ambient B′ with the structure m[ ] n[ t1[ . . . ts+1[ ] ] ] such that T1(B′) = N′. After obtaining N′, N has the structure N = M1 N′ if M = M1 M′ and it encodes the mobile ambient B = A1 | B′, or N has the structure N = M2[ N′ ] if M = M2[ M′ ] and it encodes the mobile ambient B = A2[ B′ ]. The transition from M′ to N′ also represents the transition from M to N. It should be noticed that by consuming the capability out m we have A′ ⇒amb B′. So the transition from M to N with the consumption of only one non-star object is simulated by the transition of A to B. All the sequences of rules from all the cases above are determined uniquely by the star objects and by the priorities imposed. □

Remark 3.63 ([AC6]). If M →r1 . . . →rk N, and both M and N contain only one dlock object, then the number of steps which transform ambient A into ambient B is the number of pairs of non-star objects consumed during the computation in the membrane evolution. The order in which the reductions take place in ambients is the order in which the pairs of non-star objects are consumed in the membrane systems.
Combining the previous two propositions, we obtain the following result.

Theorem 3.64 (Operational correspondence [AC6]). (1) If A ⇒amb B, then T(A) ⇒mem T(B). (2) If T(A) ⇒mem M, then there exists B such that A ⇒amb B and M = T(B).

6.2. Mobile Membranes with Timers and Timed Mobile Ambients. Since an extension with time for mobile ambients already exists [AC2, AC3, AC9], and one for mobile membranes is presented in [AC13], it is natural to study the relationship between these two extensions: timed safe mobile ambients and systems of mutual mobile membranes with timers.
We use P to denote the set of timed safe mobile ambients; m, n for ambient names; a, p for ambient tags (a stands for active ambients, while p stands for passive ambients), and ρ as a generic notation for both tags. We write n∆t[P]ρ to denote an ambient having the timer ∆t and the tag ρ; the tag ρ indicates whether an ambient is active or passive. An ambient n∆t[P]ρ represents a bounded place labelled by n in which a process P is executed. The syntax of the timed safe mobile ambients is defined in Table 12. Process 0 is an inactive process (it does nothing). A movement C∆t.P is provided by the capability C∆t, followed by the execution of P. P | Q is a parallel composition of processes P and Q.

Table 12. Syntax of Timed Safe Mobile Ambients [AC2]
n, m, . . .   names
P, Q ::=   processes                     C ::=   capabilities
  0           inactivity                   in n      can enter an ambient n
  C∆t.P       movement                     out n     can exit an ambient n
  n∆t[P]µ     ambient                      in̄ n      allows an ambient n to enter
  P | Q       composition                  out̄ n     allows an ambient n to exit

In timed safe mobile ambients the capabilities and ambients are used as temporal resources; if nothing happens in a predefined interval of time, the waiting process goes to another state. The timer ∆t of each temporal resource indicates that the resource is available only for a determined period of time t. If t > 0, the ambient behaves exactly as in untimed safe mobile ambients. When the timer ∆t expires (t = 0), the ambient n is dissolved and the process P is released in the surrounding parent ambient. When we initially describe the ambients, we consider that all ambients are active, and associate the tag a to them. The passage of time is described by the discrete time progress function φ∆ defined over the set P of timed processes. This function modifies a process according to the passage of time; all the possible actions are performed at every tick of a universal clock. The function φ∆ affects the ambients and the capabilities which are not consumed.
The consumed capabilities and ambients disappear together with their timers. If a capability or ambient has the timer equal to ∞ (i.e., simulating the behaviour of an untimed capability or ambient), we use the equality ∞ − 1 = ∞ when applying the function φ∆. Another property of the time progress function φ∆ is that the passive ambients can become active at the next unit of time in order to participate in other reductions. For the process C∆t.P the timers of P are activated only after the consumption of capability C∆t (in at most t units of time). The reduction rules (Table 13) show how the time progress function φ∆ is used.

Definition 3.65 (Global time progress function, [AC13]). We define φ∆ : P → P by:
φ∆(P) =
  C∆(t−1).R            if P = C∆t.R, t > 0
  R                    if P = C∆t.R, t = 0
  φ∆(R) | φ∆(Q)        if P = R | Q
  n∆(t−1)[φ∆(R)]a      if P = n∆t[R]ρ, t > 0
  R                    if P = n∆t[R]ρ, t = 0
  P                    if P = 0
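The case analysis of Definition 3.65 can be cross-checked with a small executable sketch. The AST classes and the INF timer below are my own minimal encoding (not from the book); tags are plain strings, and every surviving ambient is reactivated to "a" by the clock tick, as in the definition.

```python
# A minimal sketch (hypothetical encoding) of the time progress function phi_Delta
# from Definition 3.65. INF models an "untimed" resource, with INF - 1 == INF.

INF = float("inf")

class Nil:                       # the inactive process 0
    def __repr__(self): return "0"

class Cap:                       # a capability C^{t}.P
    def __init__(self, name, t, cont): self.name, self.t, self.cont = name, t, cont
    def __repr__(self): return f"{self.name}^{self.t}.{self.cont}"

class Amb:                       # an ambient n^{t}[P]^{rho}
    def __init__(self, name, t, body, tag="a"):
        self.name, self.t, self.body, self.tag = name, t, body, tag
    def __repr__(self): return f"{self.name}^{self.t}[{self.body}]^{self.tag}"

class Par:                       # parallel composition P | Q
    def __init__(self, left, right): self.left, self.right = left, right
    def __repr__(self): return f"{self.left} | {self.right}"

def phi(p):
    if isinstance(p, Cap):
        # a live capability ticks; an expired one releases its continuation
        return Cap(p.name, p.t - 1, p.cont) if p.t > 0 else p.cont
    if isinstance(p, Par):
        return Par(phi(p.left), phi(p.right))
    if isinstance(p, Amb):
        # a live ambient ticks (body included) and becomes active ("a");
        # an expired ambient dissolves, releasing its content
        return Amb(p.name, p.t - 1, phi(p.body), "a") if p.t > 0 else p.body
    return p                     # P = 0

# one clock tick on n^2[ in m^1 . 0 ]
print(phi(Amb("n", 2, Cap("in m", 1, Nil()))))
```

Note the design choice mirrored from the definition: when a capability expires, its continuation is released untouched (R, not φ∆(R)), since the timers of a continuation are activated only after the guarding capability is dealt with.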
Processes can be grouped into equivalence classes by an equivalence relation ≡ called structural congruence, which provides a way of rearranging expressions so that interacting parts can be brought together. We write M ⇝̸ when none of the rules from Table 13, except the rule (R-TimePass), can be applied to M. The evolution of timed safe mobile ambients is given by the following reduction rules:

Table 13. Reduction Rules [AC13]
(R-In)        n∆t1[ in∆t2 m.P | Q ]a | m∆t3[ in̄∆t4 m.R ]µ ⇝ m∆t3[ n∆t1[ P | Q ]p | R ]µ
(R-Out)       m∆t3[ n∆t1[ out∆t2 m.P | Q ]a | out̄∆t4 m.R ]µ ⇝ n∆t1[ P | Q ]p | m∆t3[ R ]µ
(R-Amb)       if P ⇝ Q, then n∆t[ P ]µ ⇝ n∆t[ Q ]µ
(R-Par1)      if P ⇝ Q, then P | R ⇝ Q | R
(R-Par2)      if P ⇝ Q and P′ ⇝ Q′, then P | P′ ⇝ Q | Q′
(R-Struct)    if P′ ≡ P, P ⇝ Q and Q ≡ Q′, then P′ ⇝ Q′
(R-TimePass)  if M ⇝̸, then M ⇝ φ∆(M)
In the rules (R-In) and (R-Out) the ambient m can be passive or active, while the ambient n is active. The difference between passive and active ambients is that the passive ambients can be used in several reductions in a unit of time, while the active ambients can be used in at most one reduction in a unit of time, by consuming their capabilities. In the rules (R-In) and (R-Out) the active ambient n becomes passive, forcing it to consume only one capability in one unit of time. The ambients which are tagged as passive become active again by applying the global time progress function via (R-TimePass). In timed safe mobile ambients, if a process evolves by one of the rules (R-In), (R-Out) while another one does not perform any reduction, then the rule (R-Par1) should be applied. If more than one process evolves in parallel by applying one of the rules (R-In), (R-Out), then the rule (R-Par2) should be applied. We use the rule (R-Par2) to compose processes which are active, and the rule (R-Par1) to compose processes which are active and passive. We denote by M(Π) the set of configurations obtained along all the possible evolutions of a system Π of mutual mobile membranes with timers.

Definition 3.66 ([AC13]). For a system Π of mutual mobile membranes with timers, if M and N are two configurations from M(Π), we say
that M reduces to N (denoted by M → N) if there exists a rule in the set R of Π applicable to the configuration M such that we can obtain the configuration N.

In order to give a formal encoding of timed safe mobile ambients into the systems of mutual mobile membranes with timers, we define the following function:

Definition 3.67 ([AC13]). A translation T : P → M(Π) is given by:
T(A) =
  C∆t T(A1)         if A = C∆t.A1
  [ T1(A1) ]∆tn     if A = n∆t[ A1 ]ρ
  T1(A1) T1(A2)     if A = A1 | A2
  λ                 if A = 0
where the system Π of mutual mobile membranes with timers is constructed as Π = (V, H, µ, w1, . . . , wn, R, T, iO), where:
• n ≥ 1 is the number of ambients from A;
• V is an alphabet containing the C objects from T(P);
• H is a finite set of labels containing the labels of ambients from A;
• µ ⊂ H × H describes the membrane structure, which is similar to the ambient structure of A;
• wi ∈ V∗, 1 ≤ i ≤ n, are multisets of objects which contain the C objects from T(A) placed inside membrane i;
• T ⊆ {∆t | t ∈ N} is a multiset of timers assigned to each membrane and object; the timer of each ambient or capability from A is the same in the corresponding translated membrane or object;
• iO is the output membrane; it can be any membrane;
• R is a finite set of developmental rules of the following forms:
(1) [ in∆t2 m ]∆t1n [ in̄∆t4 m ]∆t3m → [ [ ]∆t1n ]∆t3m, for all n, m ∈ H and all in m, in̄ m ∈ V;
(2) [ [ out∆t2 m ]∆t1n out̄∆t4 m ]∆t3m → [ ]∆t1n [ ]∆t3m, for all n, m ∈ H and all out m, out̄ m ∈ V.
When applying the translation function we do not take into account the tag ρ, since in mobile membranes a membrane is active or passive depending on the rules which are applied in an evolution step, and we do not need something similar to the ambient tags.

Proposition 3.68 ([AC13]).
If P is a timed safe mobile ambient such that P ⇝ Q, then there exists a system Π of mutual mobile membranes with timers and two configurations M, N ∈ M(Π) such that M = T(P), M → N and N = T(Q).

Proof (Sketch). The construction of Π follows the steps of Definition 3.67. If P ⇝ Q, then there exists a rule in the set of rules R of Π such that M → N and N = T(Q).
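As an illustration of Definition 3.67, here is a rough executable sketch of the translation T. The tuple-based process representation, the single T function standing in for both T and T1, and the name "in' m" for a translated co-capability object are my own assumptions, not the book's notation.

```python
# A sketch (hypothetical encoding, not from [AC13]) of the translation T of
# Definition 3.67: capabilities become timed objects, ambients become timed
# membranes (the tag rho is dropped), parallel composition becomes multiset
# union, and 0 becomes the empty multiset lambda.

def T(p):
    kind = p[0]
    if kind == "cap":                  # C^{dt}.A1  ->  C^{dt} T(A1)
        _, c, t, cont = p
        return [("obj", c, t)] + T(cont)
    if kind == "amb":                  # n^{dt}[A1]^{rho}  ->  [ T(A1) ]^{dt}_n
        _, n, t, body, _tag = p        # the tag rho is not translated
        return [("mem", n, t, T(body))]
    if kind == "par":                  # A1 | A2  ->  T(A1) T(A2)
        _, a1, a2 = p
        return T(a1) + T(a2)
    return []                          # 0 -> lambda (empty multiset)

# P = n^4[ in^1 m ]^a | m^6[ in'^5 m ]^a   ("in' m" stands for the co-capability)
P = ("par",
     ("amb", "n", 4, ("cap", "in m", 1, ("nil",)), "a"),
     ("amb", "m", 6, ("cap", "in' m", 5, ("nil",)), "a"))
print(T(P))
```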
Proposition 3.69 ([AC13]). If P is a timed safe mobile ambient, Π is a system of mutual mobile membranes with timers and M, N ∈ M(Π) are two configurations, with M = T (P ) and M → N , then there exists a timed safe mobile ambient Q such that N = T (Q).
Proof (Sketch). The system Π of mutual mobile membranes with timers is constructed in the same way as in Definition 3.67. If M → N in the system Π of mutual mobile membranes with timers, then there exists a timed safe mobile ambient Q such that N = T(Q).
Remark 3.70 ([AC13]). In Proposition 3.69 it is possible to have P ⇝̸ Q. Let us suppose that P = n∆t4[ in∆t1 m.in∆t2 k.out∆t3 s ]ρ | m∆t6[ in̄∆t5 m ]ρ. By translation we obtain M = [ in∆t1 m in∆t2 k out∆t3 s ]∆t4n [ in̄∆t5 m ]∆t6m. By constructing a system Π of mutual mobile membranes with timers as shown in Definition 3.67, we have M, N ∈ M(Π) with M → N and N = [ [ in∆t2 k out∆t3 s ]∆t4n ]∆t6m. For this N there exists Q = m∆t6[ n∆t4[ out∆t3 s.in∆t2 k ]ρ ]ρ such that N = T(Q) but P ⇝̸ Q.
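The phenomenon behind Remark 3.70 is that membrane contents are multisets, so the order of a translated capability path is lost: two ambients with the same capabilities in different orders translate to the same membrane contents. A tiny sketch (hypothetical encoding of capability paths as Python lists):

```python
# Illustration of why Remark 3.70 holds: flattening a capability path into the
# multiset of objects inside the translated membrane forgets the order.

from collections import Counter

def contents(path):
    # path: a list of capability names, standing in for C1.C2. ... .Cn
    return Counter(path)   # the multiset placed inside the membrane

p_contents = contents(["in k", "out s"])   # from in^{t2} k . out^{t3} s
q_contents = contents(["out s", "in k"])   # from out^{t3} s . in^{t2} k

# same multiset, although the two ambient processes are different
print(p_contents == q_contents)
```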
6.3. Decidability Results. Reachability is the problem of deciding whether a system may reach a given configuration during its execution. This is one of the most critical properties in the verification of systems; most safety properties of computing systems can be reduced to the problem of checking whether a system may reach an “unintended state”. In [AC4] we investigate the problem of reaching a certain configuration in systems of mobile membranes starting from a given configuration. We prove that reachability in systems of mobile membranes can be decided by reducing it to the reachability problem of a version of pure and public ambient calculus from which the open capability has been removed. It is proven in [10] that reachability for this fragment of ambient calculus is decidable by reducing it to marking reachability for Petri nets, which is proven to be decidable in [74]. The reachability problem is investigated in [37] for other classes of P systems, namely for extensions of PB systems with volatile membranes.

When working with Petri nets, reachability is a general property of interest. Given a net with initial marking ω0, we say that the marking ω is reachable if there exists a sequence of firings ω0 → ω1 → . . . → ωn = ω of the net. The reachability problem is decidable for Petri nets, even though it tends to have a very large complexity in practice. A good survey of the known decidability issues for Petri nets is given in [39]. Since we use the reduction to mobile ambients, we construct a class of systems of mobile membranes in which the replication from mobile ambients is expressed explicitly by duplicating objects or membranes in systems of mobile membranes.

Definition 3.71 ([AC4]). A system of n mobile membranes with replication rules is a structure Π = (V ∪ V̄, H ∪ H̄, µ, w1, . . . , wn, R), where:
(1) n ≥ 1 represents the initial degree of the system;
(2) V ∪ V̄ is an alphabet (its elements are called objects), where V ∩ V̄ = ∅;
(3) H ∪ H̄ is a finite set of labels for membranes, where H ∩ H̄ = ∅;
(4) µ ⊆ H × H describes the membrane structure, such that (i, j) ∈ µ denotes that the membrane labelled by j is contained in the membrane labelled by i; we distinguish the external membrane (usually called the “skin” membrane) and several internal membranes; a membrane without any other membrane inside it is said to be elementary;
(5) w1, w2, . . . , wn are multisets of objects from V ∪ V̄ placed in the n membranes of the system;
(6) R is a finite set of developmental rules, of the following forms:
(a) [ ā↓ → ā↓ a↓ ]h, for h ∈ H, a↓ ∈ V, ā↓ ∈ V̄; replication rule
The objects ā↓ are used to create new objects a↓ without being consumed.
(b) [ ā↓ a↓ → ā↓ ]h, for h ∈ H, a↓ ∈ V, ā↓ ∈ V̄; consumption rule
The objects a↓ are consumed.
(c) [ ā↑ → ā↑ a↑ ]h, for h ∈ H, a↑ ∈ V, ā↑ ∈ V̄; replication rule
The objects ā↑ are used to create new objects a↑ without being consumed.
(d) [ ā↑ a↑ → ā↑ ]h, for h ∈ H, a↑ ∈ V, ā↑ ∈ V̄; consumption rule
The objects a↑ are consumed.
(e) [ a↓ ]h [ ]a → [ [ ]h ]a, for a, h ∈ H, a↓ ∈ V; endocytosis
An elementary membrane labelled h enters the adjacent membrane labelled a (containing an object a↓). The labels h and a remain unchanged during this process; however, the object a↓ is consumed during the operation. Membrane a is not necessarily elementary.
(f) [ [ a↑ ]h ]a → [ ]h [ ]a, for a, h ∈ H, a↑ ∈ V; exocytosis
An elementary membrane labelled h is sent out of a membrane labelled a (containing an object a↑). The labels of the two membranes remain unchanged; the object a↑ of membrane h is consumed during the operation. Membrane a is not necessarily elementary.
(g) [ ]h̄ → [ ]h̄ [ ]h, for h ∈ H, h̄ ∈ H̄; division rule
An elementary membrane labelled h̄ is divided into two membranes labelled by h̄ and h having the same objects.
V ∩ V̄ = ∅ states that the objects from V̄ can participate only in rules of type (a)-(d). Similarly, H ∩ H̄ = ∅ states that the membranes having labels from the set H̄ can participate only in rules of type (g). The rules are applied using the following principles:
(1) In biological systems molecules are divided into classes of different types. We make the same decision here and split the objects into four classes: a↓ - objects which control endocytosis, a↑ - objects which control exocytosis, and ā↓, ā↑ - objects which produce new objects of the first two classes without being consumed.
(2) All the rules are applied in parallel, non-deterministically choosing the rules, the membranes, and the objects in such a way that the parallelism is maximal; this means that in each step we apply a set of rules such that no further rule, and no further membranes and objects, can evolve at the same time.
(3) The membrane a from each rule of type (e) and (f) is said to be passive, while membrane h is said to be active. In any step of a computation, any object and any active membrane can be involved in at most one rule, but the passive membranes are not considered involved in the use of rules (hence they can be used by several rules at the same time).
(4) When a membrane is moved across another membrane, by endocytosis or exocytosis, its whole content (its objects) is moved.
(5) If a membrane is divided, then its content is replicated in the two new copies.
(6) The skin membrane can never be divided.
According to these rules, we get transitions among the configurations of the system. For two systems of mobile membranes M and N, we say that M reduces to N if there is a sequence of rules applicable in the system of mobile membranes M in order to obtain the system of mobile membranes N. In what follows we prove that the problem of reaching a configuration starting from a given configuration is decidable for the systems of mobile membranes of Definition 3.71.
Theorem 3.72 ([AC4]). For two arbitrary systems of mobile membranes with replication rules M1 and M2 , it is decidable whether M1 reduces to M2 . The main steps of the proof are as follows: (1) systems of mobile membranes are reduced to pure and public mobile ambients without the capability open; (2) the reachability problem for two arbitrary systems of mobile membranes can be expressed as the reachability problem for the corresponding mobile ambients.
(3) the reachability problem is decidable for a fragment of pure and public mobile ambients without the capability open.
The rest of this subsection is devoted to the proof of Theorem 3.72.

6.3.1. From Mobile Membranes to Mobile Ambients. We use the following translation steps:
(1) any object a↓ is translated into a capability in a;
(2) any object a↑ is translated into a capability out a;
(3) any object ā↓ is translated into a replication !in a;
(4) any object ā↑ is translated into a replication !out a;
(5) a membrane h is translated into an ambient h;
(6) an elementary membrane h̄ is translated into a replication !h[ ], where all the objects inside membrane h̄ are translated into capabilities in ambient h using the above steps.
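The translation steps above can be sketched in code; the tuple representation of objects and membranes, and the field names, are my own assumptions rather than the notation of [AC4].

```python
# A sketch (hypothetical data representation, not from [AC4]) of the six
# translation steps: plain objects become capabilities, barred objects become
# replicated capabilities, and a barred elementary membrane becomes a
# replicated ambient.

def trans_obj(o):
    kind, a = o                       # ("down", a), ("up", a), ("bdown", a), ("bup", a)
    if kind == "down":  return f"in {a}"      # step (1): a-down  ->  in a
    if kind == "up":    return f"out {a}"     # step (2): a-up    ->  out a
    if kind == "bdown": return f"!in {a}"     # step (3): barred a-down -> !in a
    if kind == "bup":   return f"!out {a}"    # step (4): barred a-up   -> !out a
    raise ValueError(kind)

def trans_mem(m):
    label, barred, objs, children = m
    body = [trans_obj(o) for o in objs] + [trans_mem(c) for c in children]
    amb = (label, body)                       # step (5): membrane h -> ambient h
    return ("!", amb) if barred else amb      # step (6): barred membrane -> !h[ ]

# M = [ m-down m-up ]_n  [ ]_m   translates to   n[in m | out m] | m[ ]
M = [("n", False, [("down", "m"), ("up", "m")], []),
     ("m", False, [], [])]
print([trans_mem(x) for x in M])
```

This reproduces the example used later in the text, where M = [m↓ m↑]n [ ]m yields T(M) = n[in m | out m] | m[ ].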
A correspondence exists between the rules of the systems of mobile membranes and the reduction rules of the mobile ambients as follows:
- rule (c) corresponds to rule (In);
- rule (d) corresponds to rule (Out);
- rules (a), (b), (e) correspond to instances of rule (Repl).
The rule (Repl) from mobile ambients has the form A ⇒amb !A | A. If we start with a system of mobile membranes M, we denote by T(M) the mobile ambient obtained using the above translation steps. For example, starting from the system of mobile membranes M = [m↓ m↑]n [ ]m we obtain T(M) = n[in m | out m] | m[ ].

Proposition 3.73 ([AC4]). For two systems of mobile membranes M and N, M reduces to N by applying one rule if and only if T(M) reduces to T(N) by applying only one reduction rule.

Proof (Sketch). Since M reduces to N by applying one rule, one of the rules of type (a), . . . , (e) is applied. We treat only the case when a rule of type (a) is applied, the others being treated in a similar manner. If a rule ā↓ → ā↓ a↓ is applied, only one object from the system of mobile membranes M is used (namely ā↓) to create a new object a↓, thus obtaining the system of mobile membranes N. By translating the system of mobile membranes M into T(M), we have that ā↓ is translated into !in a. By applying the reduction rule corresponding to (a) (namely the rule (Repl)) to !in a, we have that !in a ⇒amb in a | !in a, and so a new capability in a is created. We observe that T(ā↓ a↓) = !in a | in a, which means that the obtained mobile ambient is T(N) (in fact it is structurally congruent to T(N)). According to Proposition 3.73 the reachability problem for systems of mobile membranes can be reduced to a similar problem for mobile ambients.
6.3.2. From Mobile Ambients to Petri Nets. After translating the systems of mobile membranes into a fragment of mobile ambients, we present the algorithm used in [10] to translate this fragment of mobile ambients into a fragment of Petri nets which is known to be decidable from [74]. The fragment of mobile ambients used here is a subset of the fragment of mobile ambients used in [10]; the difference is provided by the extra rule !A ⇒amb !A | !A used in [10]. We observe that applying a reduction rule over a process either increases the number of ambients or leaves it unchanged. The only reduction rule which increases the number of ambients when applied is the rule (Repl), while the other reduction rules leave the number of ambients unchanged. If we reach process B starting from process A, then the number of ambients of process B is known. Therefore, we can use this information to know how many times the reduction rule (Repl) is applied to replicate ambients. A similar argument does not hold for capabilities, as they can be consumed by the reduction rules (In) and (Out). An ambient context C is a process in which some holes may occur (denoted by □). Using the ambient contexts, we split a process into two parts: one is a context containing ambients, whereas the other is a process without ambients. In order to uniquely identify all the occurrences of replication, ambient, capability or hole within an ambient context or a process, we introduce a labelling system. Using a countable set of labels, we say that a process A or an ambient context C is well-labelled if any label occurs at most once in A or C. We denote by Amb(C) the multiset of ambients occurring in an ambient context C. We say that two processes are label-free-equivalent if, after removing all the labels from the two processes, they are structurally congruent.

6.3.3. I) Labelled Transition System.
For the reachability problem A ⇒∗ B, we denote by CA a well-labelled ambient context, and by θA a mapping from the set of holes in CA to some labelled processes without replicable ambients such that θA(CA) is well-labelled, and θA(CA) = A when labels are ignored. A labelled transition system LA,B describes all possible reductions for a context CA: this includes reductions of replications and capabilities contained in CA and in the processes associated with the holes of the context. The states of the labelled transition system LA,B are associative-commutative equivalence classes of ambient contexts, and for simplicity we often identify a state with one of the representatives of its class. We define a mapping θLA,B which extends the mapping θA. Initially, LA,B contains (the equivalence class of) CA as its unique state, and we have θLA,B = θA. We present in what follows the construction steps of LA,B and θLA,B, where cap stands for in or out:
(1) For any ambient context C from LA,B and for any labelled capability capw n in C, if this capability can be executed using one of the rules
(In) or (Out) leading to some ambient context C′, then a state C′ and a transition from C to C′ labelled by capw n are added to LA,B.
(2) For any ambient context C from LA,B and for any labelled replication !w in C such that the reduction rule (Repl) is applied, we define the ambient context C′ as follows: C′ is identical to C except that the subcontext !w Ca in C is replaced by !w Ca | γ(Ca) in C′; the mapping γ relabels Ca with fresh labels, such that C′ is well-labelled. If Amb(C′) ⊆ Amb(B), then the state C′ and a transition from C to C′ labelled by !w are added to LA,B. Additionally, we define θ′LA,B as an extension of θLA,B such that for all w′ in Ca we have:
(i) θ′LA,B(γ(w′)) and θLA,B(w′) are label-free-equivalent,
(ii) the labels in θ′LA,B(γ(w′)) are fresh in the currently built transition system LA,B,
(iii) θ′LA,B(γ(w′)) is well-labelled.
Finally, we set θLA,B to be θ′LA,B.
(3) For any ambient context C from LA,B, for any labelled hole □w in C and for any capability capw n in the process θLA,B(□w), we consider the ambient context Cm identical to C except that □w in C has been replaced by □w | capw n in Cm. If the capability capw n can be consumed in Cm using one of the rules (In) or (Out) leading to an ambient context C′, then the state C′ and a transition from C to C′ labelled by capw n are added to the transition system LA,B.
(4) For any ambient context C from LA,B and for any labelled hole □w in C associated by θLA,B with a process of the form !w′ A′, if a replication !w′ can be reduced in the process θLA,B(C) using the rule (Repl), then a transition from C to itself labelled by !w′′ is added to LA,B for any replication !w′′ in θLA,B(□w).
In the second step, the reduction of a replication contained in the ambient context by means of the rule (Repl) is performed only when the multiset of ambients of the resulting process is contained in the multiset of ambients of the target process B, namely Amb(C′) ⊆ Amb(B). This requirement is crucial, as it implies that the transition system LA,B has only finitely many states. As an example, we give in Figure 6 the labelled transition system associated with the process n[!^1 in m.!^2 out m] | m[ ] (we omit the unnecessary labels in this process). We use the labelled replications !^1 and !^2 to distinguish between the different replication operators which appear in this process. We observe that the labelled transitions in LA,B for replications and capabilities of an ambient context correspond to the reductions performed over processes. As shown in steps 3 and 4, the transitions for any capabilities or replications associated with the holes are added independently of whether they are effectively available to perform a transition (at this point).
Figure 6. A Labelled Transition System for n[!^1 in m.!^2 out m] | m[ ] (its states are a = n[ ] | m[ ] and b = m[n[ ]]; its transitions are labelled by in m, out m, !^1 and !^2)
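The well-labelledness condition introduced above can be checked mechanically. The following is a minimal sketch (the tuple representation and all function names are ours, not from the text): a labelled process is a nested term, and it is well-labelled when no label occurs twice.

```python
from collections import Counter

# A labelled process is a nested tuple (hypothetical representation):
#   ("amb", label, name, P)                - an ambient n[P]
#   ("cap", label, "in"/"out", name, P)    - a capability prefix cap n. P
#   ("repl", label, P)                     - a replication !P
#   ("par", P1, P2)                        - parallel composition P1 | P2
#   ("hole", label)                        - a hole of an ambient context
#   ("nil",)                               - the inactive process

def labels(p):
    """Collect all labels occurring in a term, with multiplicities."""
    tag = p[0]
    if tag == "nil":
        return Counter()
    if tag == "hole":
        return Counter([p[1]])
    if tag == "par":
        return labels(p[1]) + labels(p[2])
    if tag == "amb":
        return Counter([p[1]]) + labels(p[3])
    if tag == "cap":
        return Counter([p[1]]) + labels(p[4])
    if tag == "repl":
        return Counter([p[1]]) + labels(p[2])
    raise ValueError(tag)

def well_labelled(p):
    """A term is well-labelled if every label occurs at most once."""
    return all(c == 1 for c in labels(p).values())

# n[!^1 in m.!^2 out m] | m[ ], with extra labels a, b, c, d added for illustration
proc = ("par",
        ("amb", "a", "n",
         ("repl", "1", ("cap", "b", "in", "m",
                        ("repl", "2", ("cap", "c", "out", "m", ("nil",)))))),
        ("amb", "d", "m", ("nil",)))
print(well_labelled(proc))  # True: every label occurs exactly once
```

Reusing a label anywhere in the term, for instance giving two holes the same label, makes the check fail.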
6.3.4. II) From Processes Without Ambients to Petri Nets. In what follows we show how to build a Petri net from a labelled process without ambients. We denote by E(E) the set of all multisets which can be built with elements from the set E. We recall that a Petri net is given by a 5-tuple (P, Pi, T, Pre, Post), where
• P is a finite set of places;
• Pi ⊆ P is a set of initial places;
• T is a finite set of transitions;
• Pre, Post : T → E(P) are mappings from transitions to multisets of places.
We say that an ambient-free process is rooted if it is of the form cap^w n.A′ or of the form !^w A′. We define the Petri net PN_A′ associated with a rooted process A′ as follows: the places of PN_A′ are precisely the rooted subprocesses of A′, and A′ itself is the unique initial place; the transitions are defined as the set of all capabilities in^w n, out^w n and replications !^w occurring in A′. Finally, Pre and Post are defined for all transitions as follows:
• Pre(cap^w n) = {cap^w n} and Post(cap^w n) = ∅ if cap^w n is a place in PN_A′;
• Pre(cap^w n) = {cap^w n.(A1 | … | Ak)} and Post(cap^w n) = {A1, …, Ak} if cap^w n.(A1 | … | Ak) is a place in PN_A′ (A1, …, Ak being rooted processes);
• Pre(!^w) = {!^w A′} and Post(!^w) = {!^w A′, A′} if !^w A′ is a place in PN_A′.
For !^1 in m.!^2 out m, we obtain the Petri net given in Figure 7.
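The construction of PN_A′ can be sketched directly from the definition above; the representation and all names below are our own illustration, with places taken to be the rooted subprocesses and one transition per capability or replication.

```python
# Rooted ambient-free processes, as hashable nested tuples (hypothetical syntax):
#   ("cap", label, "in"/"out", name, conts)  - cap^label n . (A1 | ... | Ak)
#   ("repl", label, conts)                   - !^label (A1 | ... | Ak)
# where conts is a tuple of rooted processes.

def places(p):
    """All rooted subprocesses of p: the places of the Petri net."""
    result = {p}
    for q in (p[4] if p[0] == "cap" else p[2]):
        result |= places(q)
    return result

def petri_net(root):
    """Build (places, initial places, Pre, Post) following the definition above."""
    P = places(root)
    pre, post = {}, {}
    for pl in P:
        if pl[0] == "cap":
            t = ("cap", pl[1], pl[2], pl[3])   # transition cap^w n
            pre[t] = [pl]                       # consumes the prefixed place
            post[t] = list(pl[4])               # produces its continuations
        else:
            t = ("repl", pl[1])                 # transition !^w
            pre[t] = [pl]
            post[t] = [pl] + list(pl[2])        # keeps !^w A' and unfolds A'
    return P, {root}, pre, post

# !^1 in m . !^2 out m
out_m = ("cap", "c_out", "out", "m", ())
repl2 = ("repl", "2", (out_m,))
in_m = ("cap", "c_in", "in", "m", (repl2,))
repl1 = ("repl", "1", (in_m,))

P, init, pre, post = petri_net(repl1)
print(len(P))  # 4 places, as in Figure 7
```

The four places obtained are exactly the rooted subprocesses !^1 in m.!^2 out m, in m.!^2 out m, !^2 out m and out m.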
Figure 7. A Petri Net for !^1 in m.!^2 out m
We denote by PN_w the Petri net PN(θLA,B(□^w)), that is, the Petri net corresponding to the rooted ambient-free process associated with the hole □^w by
θLA,B. In what follows we show how to combine the transition system LA,B and the Petri nets PN_w into one single Petri net.
6.3.5. III) Combining the Transition Systems and Petri Nets. We first turn the labelled transition system LA,B into a Petri net PNL = (PL, PL^i, TL, PreL, PostL) where
• PL is the set of states of LA,B;
• PL^i is a singleton set containing the state corresponding to the ambient context CA of A;
• TL is the set of transitions of the form (s, l, s′), with s and s′ states from LA,B and l the label of a transition from s to s′ in LA,B;
• PreL(t) = {s} and PostL(t) = {s′} for all transitions t = (s, l, s′).
We define a Petri net PNA,B = (PA,B, P^i_A,B, TA,B, PreA,B, PostA,B) by
• the places (initial places) of PNA,B are the union of the places (initial places) of PNL and of each of the Petri nets PN_w (for □^w occurring in one of the states of LA,B);
• the transitions of PNA,B are precisely the transitions of PNL;
• the mappings PreA,B and PostA,B are defined for all transitions t = (a, f, b) as:
(i) PreA,B(t) = {a} and PostA,B(t) = {b} if f does not occur as a transition in any PN_w (for □^w occurring in one of the states of LA,B);
(ii) PreA,B(t) = {a} ∪ Prew(f) and PostA,B(t) = {b} ∪ Postw(f) if f is a transition of PN_w, where Prew and Postw are the mappings Pre and Post of PN_w, respectively.
6.3.6. Deciding Reachability. We recall that for a Petri net PN = (P, P^i, T, Pre, Post), a marking m is a multiset from E(P). A transition t is enabled by a marking m if Pre(t) ⊆ m. Executing an enabled transition t for a marking m gives a marking m′ defined as m′ = (m \ Pre(t)) ∪ Post(t) (where \ stands for the multiset difference). A marking m′ is reachable from m if there exists a sequence m0, …, mk of markings such that m0 = m, mk = m′, and for each mi, mi+1 there exists a transition enabled for mi whose execution gives mi+1.
Theorem 3.74 ([74]). For all Petri nets P and for all markings m, m′ of P, one can decide whether m′ is reachable from m.
For the reachability problem A ⇒∗ B over ambients, we consider the Petri net PNA,B and the initial marking mA defined as mA = P^i_A,B. Figure 8 depicts the initial marking for the process n[!^1 in m.!^2 out m] | m[ ] as a combination of the labelled transition system of Figure 6 and the Petri net of Figure 7. It should be noticed that any marking m reachable from mA contains exactly one occurrence of a place from PL. Roughly speaking, to any reachable marking there corresponds exactly one ambient context. Moreover,
the execution of one transition in the Petri net PNA,B simulates a reduction from ⇒amb.
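The marking semantics just recalled (enabledness, firing, multiset difference and union) can be illustrated with a few lines of Python; this is only a sketch of the semantics, with invented names, not the reachability decision procedure of [74].

```python
from collections import Counter

def enabled(pre_t, m):
    """A transition is enabled if its Pre multiset is contained in marking m."""
    return all(m[p] >= k for p, k in pre_t.items())

def fire(pre_t, post_t, m):
    """m' = (m \\ Pre(t)) + Post(t): multiset difference, then multiset sum."""
    m2 = Counter(m)
    m2.subtract(pre_t)
    m2.update(post_t)
    return +m2  # drop zero-count places

# A tiny net: t1 moves a token from place 'a' to 'b', t2 moves it back.
pre = {"t1": Counter({"a": 1}), "t2": Counter({"b": 1})}
post = {"t1": Counter({"b": 1}), "t2": Counter({"a": 1})}

m = Counter({"a": 1})
assert enabled(pre["t1"], m) and not enabled(pre["t2"], m)
m = fire(pre["t1"], post["t1"], m)
print(dict(m))  # {'b': 1}
```

Iterating `fire` over enabled transitions enumerates reachable markings; deciding reachability in general, however, requires the full procedure of Theorem 3.74.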
Figure 8. Petri Net for the Labelled Process n[!^1 in m.!^2 out m] | m[ ] (its transitions include (a, in m, b), (b, out m, a), (a, !^1, a), (a, !^2, a), (b, !^1, b) and (b, !^2, b))
We now define the set MB of markings of PNA,B corresponding to B. Intuitively, a marking m belongs to MB if m contains exactly one occurrence Cm of a place from PL (that is, a place representing some ambient context) and, in the context Cm, the holes can be replaced with processes without ambients so as to obtain B. Each of the processes without ambients must correspond to a marking of the sub-Petri net associated with the hole it fills up. MB is defined as the set of markings m for PNA,B satisfying:
(i) there exists exactly one ambient context Cm in m;
(ii) σm(Cm) and B are label-free-equivalent, for the substitution σm from the holes □^w occurring in Cm to processes without ambients defined as σm(□^w) = P1 | … | Pk, where {P1, …, Pk} is the multiset corresponding to the restriction of m to the places of PN_w;
(iii) for all holes □^w occurring in a state of the transition system LA,B but not in Cm, the restriction of m to the places of PN_w is precisely the set of initial places of PN_w.
We adapt to our restricted fragment the results presented in [10].
Proposition 3.75 ([AC4]). For a Petri net PNA,B, there are only finitely many markings corresponding to a process B, and the set MB can be computed.
The correctness of the translation is ensured by the following result; by using Proposition 3.76 and Theorem 3.74, we can then decide whether an ambient A can be reduced to an ambient B.
Proposition 3.76 ([AC4]). For all processes A, B we have that A ⇒amb B if and only if there exists a marking mB ∈ MB such that mB is reachable from mA in PNA,B.
Theorem 3.77 ([AC4]). For two arbitrary ambients A and B from our restricted fragment, it is decidable whether A reduces to B.
Bibliography
[AC1] B. Aman, G. Ciobanu. Translating Mobile Ambients into P Systems. Electronic Notes in Theoretical Computer Science vol.171, 11–23, 2007.
[AC2] B. Aman, G. Ciobanu. Timers and Proximities for Mobile Ambients. Lecture Notes in Computer Science vol.4649, 33–43, 2007.
[AC3] B. Aman, G. Ciobanu. Mobile Ambients with Timers and Types. Lecture Notes in Computer Science vol.4711, 50–63, 2007.
[AC4] B. Aman, G. Ciobanu. On the Reachability Problem in P Systems with Mobile Membranes. Lecture Notes in Computer Science vol.4860, 113–123, 2007.
[AC5] B. Aman, G. Ciobanu. Structural Properties and Observability in Membrane Systems. Proceedings SYNASC'07, IEEE Computer Society, 74–81, 2007.
[AC6] B. Aman, G. Ciobanu. On the Relationship Between Membranes and Ambients. BioSystems vol.91, 515–530, 2008.
[AC7] B. Aman, G. Ciobanu. Describing the Immune System Using Enhanced Mobile Membranes. Electronic Notes in Theoretical Computer Science vol.194, 5–18, 2008.
[AC8] B. Aman, G. Ciobanu. Membrane Systems with Surface Objects. Proceedings of the Workshop on Computing with Biomolecules (CBM), 17–29, 2008.
[AC9] B. Aman, G. Ciobanu. Timed Mobile Ambients for Network Protocols. Lecture Notes in Computer Science vol.5048, 234–250, 2008.
[AC10] B. Aman, G. Ciobanu. Resource Competition and Synchronization in Membranes. Proceedings SYNASC'08, IEEE Computer Society, 145–151, 2009.
[AC11] B. Aman, G. Ciobanu. Simple, Enhanced and Mutual Mobile Membranes. Transactions on Computational Systems Biology XI, LNBI vol.5750, 26–44, 2009.
[AC12] B. Aman, G. Ciobanu. Turing Completeness Using Three Mobile Membranes. Lecture Notes in Computer Science vol.5715, 42–55, 2009.
[AC13] B. Aman, G. Ciobanu. Mobile Membranes with Timers. Electronic Proceedings in Theoretical Computer Science vol.6, 1–15, 2009.
[AC14] B. Aman, G. Ciobanu. Typed Membrane Systems. Lecture Notes in Computer Science vol.?, 2010.
[CK1] G. Ciobanu, S.N. Krishna. On the Computational Power of Enhanced Mobile Membranes. Lecture Notes in Computer Science vol.5028, 326–335, 2008.
[CK2] G. Ciobanu, S.N. Krishna. Enhanced Mobile Membranes: Computability Results. Theory of Computing Systems, to appear 2010.
CHAPTER 4
Multiset Information Theory and Encodings
Starting from Shannon's theory of information, we present an information theory over multisets in which information is encoded and transmitted as multisets. We define the entropy rate of a multiset information source and derive a formula for the information content of a multiset. Then we study the encoder and channel parts of the system, obtaining some results about multiset encoding length and channel capacity. The attempt to study information sources which produce multisets instead of strings, and ways to encode information on multisets rather than strings, originates in observing new computational models like membrane systems which employ multisets. Membrane systems have been studied extensively, and there exist several results regarding their computing power, language hierarchies and complexity. However, while any researcher working with membrane systems (also called P systems) would agree that P systems process information, and that living cells and organisms do this too, we are unaware of any attempt to precisely describe natural ways to encode information on multisets or to study sources of information which produce multisets instead of strings. One could argue that, while some of the information in a living organism is encoded in a sequential manner, as in DNA for example, there might be important molecular information sources which involve multisets (of molecules) in a non-trivial way.
1. Multiset Information Theory
We start with a simple question: given a P system with one membrane containing, say, 2 objects a and 3 objects b from a known vocabulary V (suppose there are no evolution rules), how much information is present in such a system?
Moreover, many examples of P systems perform various computational tasks; these systems encode the input (usually numbers) in various ways, either by superimposing a string-like structure on the membrane system, or by using the natural encoding of the unary numeral system, that is, the natural number n is represented by n objects, for example $a^n$. However, just imagine a gland which uses the bloodstream to send molecules to some tissue which, in turn, sends back some other molecules. There is an energy and information exchange. How can we describe it? Related questions are: what are the natural ways to encode numbers (information) on multisets, and how to measure the encoded information?
If membrane systems, living cells and any other (abstract or concrete) multiset processing machines are understood as information processing machines, then we believe that such questions should be investigated. We start from the idea that a study of multiset information theory might produce useful results at least in systems biology; if we understand the natural ways to encode information on multisets, there is a chance that Nature might be using similar mechanisms. Another reason this investigation seems interesting to us is that efficiently encoding information on multisets is more challenging (so far they constitute a poorer encoding medium compared to strings). Encoding information on strings, or on even richer, more organized and complex structures, is obviously possible and has been studied. Removing the order or position of the symbols from a string representation leads to multisets, and carries a certain penalty which deserves a precise description. Order and position are not essential for information encoding; symbol multiplicity, a native quality of multisets, is enough for many purposes. We focus mainly on such "natural" approaches to information encoding over multisets, and present some advantages they have over approaches which superimpose a string structure on the multiset. Then we encode information using multisets in a similar way as it is done using strings.
1.1. Entropy Rate of an Information Source. Shannon's information theory represents one of the great intellectual achievements of the twentieth century. Information theory has had an important and significant influence on probability theory and ergodic theory, and Shannon's mathematics is a considerable and profound contribution to pure mathematics.
Shannon's important contribution comes from the invention of the source - encoder - channel - decoder - destination model, and from the elegant and general solution of the fundamental problems which he was able to pose in terms of this model. Shannon provided a significant demonstration of the power of coding with delay in a communication system and of the separation of the source and channel coding problems, and he established the fundamental natural limits on communication. As time goes on, the information theoretic concepts introduced by Shannon become more relevant to the increasingly complex process of communication. We use the notions defined in the classical paper [101], where Shannon formulated a general model of a communication system which is tractable to mathematical treatment. Consider an information source modelled by a discrete Markov process. For each possible state i of the source there is a set of probabilities pi(j) associated with the transitions to state j. Each state transition produces a symbol corresponding to the destination state; e.g., if there is a transition from state i to state j, the symbol xj is produced. Each symbol xi has an initial probability pi, i = 1..n, corresponding to the transition probability from the initial state to state i.
We can also view this as a random variable X with the xi as events with probabilities pi:
$X = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ p_1 & p_2 & \cdots & p_n \end{pmatrix}$
There is an entropy Hi for each state. The entropy rate of the source is defined as the average of these Hi weighted in accordance with the probability Pi of occurrence of the states:
(12) $H(X) = \sum_i P_i H_i = -\sum_{i,j} P_i\, p_i(j) \log p_i(j)$
Suppose there are two symbols xi, xj, and p(i, j) is the probability of the successive occurrence of xi and then xj. The entropy of the joint event is
$H(i, j) = -\sum_{i,j} p(i, j) \log p(i, j)$
The probability of the symbol xj appearing after the symbol xi is the conditional probability pi(j).
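Formula (12) can be illustrated numerically; the two-state transition probabilities below are invented for the example, and the stationary distribution P_i is approximated by iterating the chain.

```python
from math import log2

# Transition probabilities p_i(j) of a two-state Markov source (made-up values)
p = [[0.9, 0.1],
     [0.4, 0.6]]

# Stationary distribution P_i, approximated by iterating the chain
P = [0.5, 0.5]
for _ in range(1000):
    P = [sum(P[i] * p[i][j] for i in range(2)) for j in range(2)]

# Entropy rate (12): H = - sum_{i,j} P_i p_i(j) log p_i(j)
H = -sum(P[i] * p[i][j] * log2(p[i][j])
         for i in range(2) for j in range(2) if p[i][j] > 0)
print(round(H, 3))  # ≈ 0.569 bits per symbol
```

For this chain the exact stationary distribution is (0.8, 0.2), so H is the weighted average of the two per-state entropies H(0.9, 0.1) and H(0.4, 0.6).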
Remark 4.1. The quantity H(X) is a reasonable measure of choice or information.
String Entropy. Consider an information source X which produces sequences of symbols selected from a set of n independent symbols xi with probabilities pi. The entropy formula for such a source is given in [101]:
$H(X) = \sum_{i=1}^{n} p_i \log_b \frac{1}{p_i}$
1.2. Multiset Entropy. We consider a discrete information source which produces multiset messages (as opposed to string messages). A message is a multiset of symbols. The entropy rate of such a source is proved to be zero in [103]:
$H(X_{multiset}) = \lim_{n \to \infty} \frac{1}{n} H(\{m_i\}_{i=1}^{n}) = 0$
Information content. Following [73], the information content of an outcome x is
(13) $h(x) = \log \frac{1}{P(x)}$
where P(x) is the probability of the multiset x.
Let $k \in \mathbb{N}$, let $X = \begin{pmatrix} x_1 & x_2 & \ldots & x_n \\ p_1 & p_2 & \ldots & p_n \end{pmatrix}$ be a random variable, and let $x = x_1^{m_1} x_2^{m_2} \ldots x_n^{m_n}$ be a multiset over symbols from X with $\sum_{i=1}^{n} m_i = k$. The probability of the outcome x is given by the multinomial distribution $\binom{k}{m_1, m_2, \ldots, m_n} \prod_{i=1}^{n} p_i^{m_i}$:
Figure 1. An Example of a P System
$P[x = (m_1, m_2, \ldots, m_n)] = \frac{(\sum_{i=1}^{n} m_i)!}{\prod_{i=1}^{n} m_i!} \prod_{i=1}^{n} p_i^{m_i}$
So, the information content of the multiset x is:
$h(x = x_1^{m_1} x_2^{m_2} \ldots x_n^{m_n}) = \log \frac{1}{P[x]} = \log \left( 1 \Big/ \frac{(\sum_{i=1}^{n} m_i)!}{\prod_{i=1}^{n} m_i!} \prod_{i=1}^{n} p_i^{m_i} \right) = \log \frac{\prod_{i=1}^{n} m_i!}{(\sum_{i=1}^{n} m_i)! \prod_{i=1}^{n} p_i^{m_i}}$
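The information content formula just derived can be evaluated directly; the function below is our own sketch of the computation.

```python
from math import factorial, log2, prod

def multiset_information_content(m, p):
    """h(x) for a multiset x = x1^m1 ... xn^mn with symbol probabilities p,
    following h(x) = log( prod(mi!) / ( (sum mi)! * prod(pi^mi) ) )."""
    k = sum(m)
    prob = factorial(k) / prod(factorial(mi) for mi in m) * \
           prod(pi ** mi for mi, pi in zip(m, p))
    return log2(1 / prob)

# Two equiprobable symbols; multiset a^1 b^1 has P = 2*(1/2)*(1/2) = 1/2
print(multiset_information_content([1, 1], [0.5, 0.5]))  # 1.0 (one bit)
```

By contrast, the multiset a^2 has probability 1/4 and hence an information content of 2 bits.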
Remark 4.2. The results and procedures presented here refer mainly to deterministic P systems. A deterministic P system has an entropy rate converging to zero, and the information content of a unique configuration converging to zero.
Example 4.3. As an example, we consider the P system described by Figure 1. Essentially, a P system is a multiset of objects and a set of rules. By applying the rules, we can generate all the possible configurations (multisets of objects), and their probabilities of being generated at each step of the execution. Using all the possible configurations, the information content is computed for each configuration, and then represented in Figure 2. Only the information content of c3 approaches 0; this means that the probability of this configuration approaches 1. Therefore c3 has the highest probability of being the final configuration of the system. The entropy rate computed for these configurations at each step of the evolution is represented in Figure 3. The entropy converges to 0, meaning that the system is deterministic and, in time, a configuration will appear with a probability converging to 1. Looking at Figure 2, we can identify c3 as the (only possible) final result of the evolution.
1.3. Multiset Encoding and Channel Capacity. After exploring the characteristics of a multiset generating information source, we move to the channel part of the communication system. Properties of previously
Figure 2. Information Content (of the configurations b3, c3, a1c2, a2c1, b1c2, b2c1)
Figure 3. Entropy Rate
developed multiset encodings are analyzed in [BCI1, BCI2]. The capacity of the multiset communication channel is derived based on Shannon's definition and on the capacity theorem. We can have a multiset information source, and a usual sequence-based encoder and channel. All the following combinations are possible:

Source/Encoder   Sequential      Multiset
Sequential       [101]           this approach
Multiset         this approach   this approach

1.4. String Encoding. We shortly review the results concerning string encoding.
Encoding Length. We have a set of symbols X to be encoded, and an alphabet A. We consider the uniform encoding of length l, so that $X = \{x_i = a_1 a_2 \ldots a_l \mid a_j \in A\}$. If $p_i = P(x_i) = \frac{1}{n}$, then we have
$H(X) = \sum_{i=1}^{n} \frac{1}{n} \log_b(n) = \log_b(n) \le l$
It follows that $n \le b^l$. For $n \in \mathbb{N}$, $n - b^x = 0$ implies $x_0 = \log_b n$, and so $l = \lceil x_0 \rceil = \lceil \log_b n \rceil$.
Channel Capacity.
Definition 4.4. [101] The capacity C of a discrete channel is given by
$C = \lim_{T \to \infty} \frac{\log N(T)}{T}$
where N(T) is the number of allowed signals of duration T.
Theorem 4.5. [101] Let $b_{ij}^{(s)}$ be the duration of the s-th symbol which is allowable in state i and leads to state j. Then the channel capacity C is equal to $\log W$, where W is the largest real root of the determinant equation:
$\left| \sum_s W^{-b_{ij}^{(s)}} - \delta_{ij} \right| = 0$
where $\delta_{ij} = 1$ if i = j, and zero otherwise.
1.5. Multiset Encoding. We present some results related to multiset encoding.
Encoding Length. We consider a set X of N symbols, an alphabet A, and the length of encoding l:
$X = \{x_i = a_1^{n_1} a_2^{n_2} \ldots a_b^{n_b} \mid \sum_{j=1}^{b} n_j = l,\ a_j \in A,\ i = 1..N\}$
Proposition 4.6. Non-uniform encodings of X over multisets are shorter than uniform encodings of X over multisets.
Proof. Over multisets we consider both uniform and non-uniform encodings in [BCI2].
(1) For a uniform encoding (where all the encoding representations have the same length l) we have
$N \le N(b, l) = \left(\!\!\binom{b}{l}\!\!\right) = \binom{b+l-1}{l} = \frac{(b+l-1)!}{l!(b-1)!} = \frac{\prod_{i=1}^{b-1}(l+i)}{(b-1)!}$
If $x_0$ is the real root of $N - \frac{\prod_{i=1}^{b-1}(x+i)}{(b-1)!} = 0$, then $l = \lceil x_0 \rceil$.
(2) For a non-uniform encoding,
$N \le N(b+1, l-1) = \left(\!\!\binom{b+1}{l-1}\!\!\right) = \binom{b+l-1}{l-1} = \frac{(b+l-1)!}{(l-1)!\,b!} = \frac{\prod_{i=0}^{b-1}(l+i)}{b!} = \frac{l}{b} \cdot \frac{\prod_{i=1}^{b-1}(l+i)}{(b-1)!} = \frac{l}{b} N(b, l)$
Let $x'_0$ be the real root of $N - \frac{\prod_{i=0}^{b-1}(x+i)}{b!} = 0$. Then $l' = \lceil x'_0 \rceil$.
From $N - N(b, x_0) = 0$ and $N - \frac{x'_0}{b} N(b, x'_0) = 0$ we get $N(b, x_0) = \frac{x'_0}{b} N(b, x'_0)$. In order to prove $l > l' \iff x_0 > x'_0$, suppose that $x_0 \le x'_0$. We have $x'_0 > b$ (for sufficiently large numbers), and this implies that $N(b, x_0) \le N(b, x'_0) < \frac{x'_0}{b} N(b, x'_0)$. Since this is false, it follows that $x_0 > x'_0$, and hence $l \ge l'$.
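The counts used in the proof can be checked numerically; the function names below are ours, with N(b, l) denoting the multiset coefficient above. We use the identity that the number of multisets of cardinality at most L over b symbols equals N(b+1, L).

```python
from math import comb

def N(b, l):
    """Number of multisets of cardinality l over a b-symbol alphabet."""
    return comb(b + l - 1, l)

def uniform_length(b, n):
    """Smallest l with n <= N(b, l): all codewords have cardinality exactly l."""
    l = 0
    while N(b, l) < n:
        l += 1
    return l

def non_uniform_max_length(b, n):
    """Smallest L with n <= N(b+1, L): codewords of cardinality at most L
    (a dummy symbol turns 'at most L over b' into 'exactly L over b+1')."""
    L = 0
    while N(b + 1, L) < n:
        L += 1
    return L

for n in (100, 1000):
    print(n, uniform_length(4, n), non_uniform_max_length(4, n))
# 100 7 5
# 1000 17 10
```

For sufficiently many symbols, the non-uniform encoding is visibly shorter than the uniform one, as the proposition states.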
Channel Capacity. We consider that a sequence of multisets is transmitted along the channel. The capacity of such a channel is computed for base 4, and then some of its properties for an arbitrary base are presented.
Multiset channel capacity in base 4. In Figure 4 we have a graph G(V, E) with 4 vertices V = {S1, S2, S3, S4} and edges E = {(i, j) | i, j = 1..4, i ≤ j} ∪ {(i, j) | i = 4, j = 1..3}. Using the notation of Theorem 4.5, we have $b_{ij}^{(a_k)} = t_k$, because we consider that the duration needed to produce $a_k$ is the same for each (i, j) ∈ E. The determinant equation is
Figure 4. Multiset Channel Capacity
$\begin{vmatrix} W^{-t_1}-1 & W^{-t_2} & W^{-t_3} & W^{-t_4} \\ 0 & W^{-t_2}-1 & W^{-t_3} & W^{-t_4} \\ 0 & 0 & W^{-t_3}-1 & W^{-t_4} \\ 0 & 0 & 0 & W^{-t_4}-1 \end{vmatrix} = 0$
If we consider $t_k = t$, then the equation becomes $\left(1 - \frac{1}{W^t}\right)^4 = 0$, and $W_{real} = 1$. Therefore $C = \log_4 1 = 0$.
Multiset channel capacity in base b.
Theorem 4.7. The multiset channel capacity is zero, i.e., C = 0.
Proof. First approach. The first method for computing the capacity is using the definition from [101]:
$C = \lim_{T \to \infty} \frac{\log N(T)}{T} = \lim_{T \to \infty} \frac{\log N(b, T)}{T} = \lim_{T \to \infty} \frac{1}{T} \log \frac{(b+T-1)!}{T!(b-1)!}$
Using Stirling's approximation $\log n! \approx n \log n - n$ we obtain
$C = \lim_{T \to \infty} \frac{1}{T} \left( \log(b+T-1)! - \log T! - \log(b-1)! \right)$
$= \lim_{T \to \infty} \frac{1}{T} \left( (b+T-1)\log(b+T-1) - T \log T - (b-1)\log(b-1) \right)$
$= \lim_{T \to \infty} \log\left(1 + \frac{b-1}{T}\right) + \lim_{T \to \infty} \frac{b-1}{T} \log\left(1 + \frac{T}{b-1}\right) = 0$
Second approach. Using Theorem 4.5, the determinant equation for a multiset encoder is:
$\begin{vmatrix} W^{-t_1}-1 & W^{-t_2} & W^{-t_3} & \cdots & W^{-t_b} \\ 0 & W^{-t_2}-1 & W^{-t_3} & \cdots & W^{-t_b} \\ 0 & 0 & W^{-t_3}-1 & \cdots & W^{-t_b} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & W^{-t_{b-1}}-1 & W^{-t_b} \\ 0 & 0 & 0 & \cdots & W^{-t_b}-1 \end{vmatrix} = 0$
Remark 4.8. If $t_k = t$, then the determinant equation becomes
(14) $\left(1 - \frac{1}{W^t}\right)^b = 0$
The capacity C is given by $C = \log_b W$, where W is the largest real root of equation (14). Considering $x = W^{-t}$, we have
(15) $W = \frac{1}{\sqrt[t]{x}} \Rightarrow C = -\frac{1}{t} \log_b x$
Since we need the largest real root W, we should find the smallest positive root x of the equation $(1-x)^b = 0$, which is x = 1, and so C = 0.
Based on Shannon's classical work, we derive a formula for the information content of a multiset. Using the definition and the determinant capacity formula, we compute the multiset channel capacity. We also explore the properties of multiset-based communication systems, and compare them to similar results for string-based communication systems.
2. Data Compression on Multisets. Submultiset-Free Codes
Regular data compression converts a sequence of symbols into a sequence of strings (descriptions), assigning the shortest descriptions to the most frequent symbols produced by the data source.
By contrast, here we consider the compression of a sequence of symbols into a sequence of multisets. This is relevant if the communication channel does not preserve the order (positioning) of the transmitted letters, as is the case in many biological contexts; for example, if we regard the passage of a multiset of molecules through a blood vessel as an information transfer. We introduce the concept of submultiset-free codes, a counterpart of the existing concept of prefix-free codes. We derive a theorem about the structure of optimal submultiset codes for uniform random variables, and we describe an algorithm for constructing them.
2.1. Submultiset Multicodes. Consider the classical communication system consisting of a source, an encoder, a communication channel, a decoder and a receiver. Here we present some of the consequences of using a channel that does not preserve the order of the letters sent through it, namely a multiset channel. This is the case in many biological phenomena: the flow of (multisets of) cells and molecules through a blood vessel, the flow of molecules through the various areas of the cell interior, etc. For classical, order-preserving channels, there are many results concerning channel capacity and data compression. We aim to show that many of these results have counterparts in the case of multiset or orderless channels, and we examine some techniques for data compression in this case. Specifically, we consider the case in which the encoder converts the sequence of symbols received from the source into a sequence of multisets, each multiset representing a certain symbol. These pass through the (for now noiseless) multiset channel and reach the decoder, which converts them back to symbols and forwards them to the receiver. The mapping of symbols to multisets constitutes a source multiset code, also called a multicode or simply a code beyond this point, with the multisets being its codewords (multicodewords).
An almost immediate constraint imposed on a multicode is for it to be submultiset-free or instantaneous, meaning that no multiset in the code is a submultiset of another and, as a consequence, there is no uncertainty in decoding. The notion of a submultiset-free multicode, or submultiset code for short when not ambiguous, is proposed as a natural counterpart of the notion of a prefix-free or prefix code, well studied in the fields of Information Theory, Communication and Data Compression. After properly establishing the theoretical context with a few definitions (counterparts of others in Data Compression), we examine the available techniques for data compression in this context. We present several results regarding the structure of optimal submultiset codes. Then, by (a loose) analogy with Huffman coding, we describe an algorithm for constructing optimal submultiset codes for uniform random variables.
2.2. Theoretical Aspects. We give some definitions from [31], chapter 5 (Data Compression), and we accompany them with the corresponding definitions for the multiset case:
Definition 4.9. [31] A source code C for a random variable X is a mapping from $\mathcal{X}$, the range of X, to D∗, the set of finite-length strings of symbols from a D-ary alphabet. Let C(x) denote the codeword corresponding to x and let l(x) denote the length of C(x).
Definition 4.10. A source multiset code (multicode for short) MC for a random variable X is a mapping from $\mathcal{X}$, the range of X, to D∗, the set of finite multisets of symbols from a D-ary alphabet. Let MC(x) denote the codeword corresponding to x, with cardinality l(x) = |MC(x)|. Cardinality can also be referred to as length in this context.
Definition 4.11. [31] The expected length L(C) of a source code C(x) for a random variable X with probability mass function p(x) is given by
$L(C) = \sum_{x \in \mathcal{X}} p(x) l(x)$
where l(x) is the length of the codeword associated with x. Without loss of generality, we can assume that the D-ary alphabet is D = {0, 1, . . . , D − 1}.
Definition 4.12. The expected length L(MC) of a source multiset code MC(x) for a random variable X with probability mass function p(x) is given by
$L(MC) = \sum_{x \in \mathcal{X}} p(x) l(x)$
Note that in both cases L is the expected value of l(X), L = E[l(X)], l(X) being the length of the codewords in the case of source codes and the cardinality of the multicodewords in the case of multicodes.
Definition 4.13. [31] A code is called a prefix code, prefix-free code or instantaneous code if no codeword is a prefix of any other codeword.
Definition 4.14. A code is called a submultiset code, submultiset-free code, [inclusion code, inclusion-free code] or instantaneous code if no codeword is a submultiset of any other codeword.
Definition 4.15. A multicodeword is a codeword of a multicode.
In the following, let X be a random variable with range $\mathcal{X}$; we denote the prefix-free code by C and the submultiset-free code by MC. Let l∗(x) denote the length of the optimal submultiset-free codewords MC∗(x), where x ∈ $\mathcal{X}$. We use the notation $\left(\!\!\binom{D}{k}\!\!\right)$ for combinations over multisets, or the multiset coefficient. Note that $\left(\!\!\binom{D}{k}\!\!\right)$ generates the triangular, tetrahedral and pentatope numbers for k = 2, 3, 4 respectively [30]. Please note that D∗ means two different things depending on context: the set of all finite strings and the set of all finite multisets over the alphabet D.
2.3. Prefix-Free vs. Submultiset-Free. Let c ∈ D∗ be a codeword from a code C. We denote by prefD∗(c) the prefix set of the string c over D∗. From Definition 4.13 we know that a code C is prefix-free if and only if for any codewords ci ≠ cj from C we have ci ∉ prefD∗(cj) and cj ∉ prefD∗(ci).
Proposition 4.16. Any submultiset-free code is also a prefix-free code.
Proof. By contradiction: suppose there exists a submultiset-free code MC that is not prefix-free. This means that there exist ci ≠ cj ∈ MC such that ci ∈ prefD∗(cj) =⇒ cj = ci u, u ∈ D∗ =⇒ ci ⊂ cj (as multisets), a contradiction. So MC is prefix-free.
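Definitions 4.13 and 4.14 suggest a direct mechanical check; the sketch below (names ours) represents codewords as strings and compares them as multisets of symbols.

```python
from collections import Counter

def is_submultiset(c1, c2):
    """True if c1 ⊆ c2 as multisets of symbols."""
    m1, m2 = Counter(c1), Counter(c2)
    return all(m2[s] >= k for s, k in m1.items())

def submultiset_free(code):
    """No codeword is a submultiset of another (Definition 4.14)."""
    return not any(i != j and is_submultiset(c, d)
                   for i, c in enumerate(code) for j, d in enumerate(code))

print(submultiset_free(["000", "001", "011", "111"]))  # True
print(submultiset_free(["0", "01"]))                   # False, since 0 ⊆ 01
```

The first code, all binary multisets of cardinality 3, passes the check; the second fails even though it is prefix-free, in line with the two propositions of this section.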
Proposition 4.17. Not every prefix-free code is a submultiset-free code.
Proof. Consider the prefix-free code C = {0, 10, 11}. C is not submultiset-free, since 0 ⊂ 10.

2.4. Optimal Submultiset Codes. Here we present two examples to justify the upcoming results. Inside the tables, the Huffman codewords are left-aligned to suggest their prefix-freedom. Regarding notation, a multiset, as an equivalence class over strings, can be represented by a sorted representative string of the whole class, with a chosen order on the types. For example, the multiset {0^2, 1, 2^3} can be written as 001222, considering the order 0 < 1 < 2.

The binary case (D = 2). We build upon Example 5.1.1 from [31], adding the details regarding multiset codes.

Example 4.18. Let X be a random variable with the following distribution, codeword and multicodeword assignment:

Outcome  Probability  Huffman codeword  Multicodeword
1        1/2          0                 000
2        1/4          10                001
3        1/8          110               011
4        1/8          111               111
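Both freeness properties can be checked mechanically; a small illustrative sketch (the function names are ours, not from the text), reading each codeword as a string for the prefix test and as a multiset for the inclusion test:

```python
from collections import Counter
from itertools import combinations

def is_prefix_free(code):
    """Definition 4.13: no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in code for b in code)

def submultiset(ca, cb):
    """True if multiset ca is included in multiset cb."""
    return all(ca[x] <= cb[x] for x in ca)

def is_submultiset_free(code):
    """Definition 4.14: no codeword is a proper submultiset of another."""
    for a, b in combinations(code, 2):
        ca, cb = Counter(a), Counter(b)
        if ca != cb and (submultiset(ca, cb) or submultiset(cb, ca)):
            return False
    return True
```

Running these on C = {0, 10, 11} confirms Proposition 4.17 (prefix-free but not submultiset-free, since 0 ⊂ 10), while the multicode {000, 001, 011, 111} is submultiset-free and, in accordance with Proposition 4.16, also prefix-free.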
The code and multicode are chosen independently: C is a binary Huffman code, and MC is a binary multicode for X. Note that MC can be obtained from C by interpreting the codewords in C as multiset descriptions and then increasing the multiplicity of 0 in each multicodeword until all have the same cardinality as the initially longest ones. The entropy is H(X) = 1.75 bits, the expected length L(C) = 1.75 bits, and the expected length L(MC) = 3 bits. Note that the multicode is quite inefficient in this case, since its expected length is significantly larger than the entropy. Unfortunately, this is the case with all binary submultiset-free codes, as shown by the following theorem:
2. DATA COMPRESSION ON MULTISETS. SUBMULTISET-FREE CODES
Theorem 4.19. The most efficient binary submultiset-free code (D = 2) has an expected length L(MC*) = |𝒳| − 1.

Lemma 4.20. The most efficient binary submultiset-free code for a given X is the set of binary multisets of cardinality |𝒳| − 1.

Proof. The set of binary multisets of cardinality |𝒳| − 1 has ((2 |𝒳|−1)) = C(|𝒳|, 1) = |𝒳| elements; it is obviously submultiset-free (all its elements have the same cardinality), and it has the expected length

L(MC*) = Σ_{x∈𝒳} p(x) l(x) = l(x0) Σ_{x∈𝒳} p(x) = l(x0) = |𝒳| − 1,

since every codeword has the same length l(x0) = |𝒳| − 1.
The ternary case (D = 3). For a base D (a D-ary alphabet) with D > 2, this result can be significantly improved, as suggested by the following example.

Example 4.21. Let X be a random variable with the following distribution, codeword and multicodeword assignment:

Outcome  Probability  Huffman codeword  Multicodeword
1        1/2          0                 2
2        1/4          1                 00
3        1/8          20                01
4        1/8          21                11
In this case, the expected length of the Huffman code C is L(C) = 1.25 and L(MC) = 1.5 ternary symbols. We compute the minimum expected length of a submultiset-free code in base D = 3 for a uniform random variable. First we compute the cardinality c of each multiset in the non-positional uniform encoding of n = |𝒳|. As presented in [BCI4], we solve the equation

(16)    n − ((3 c)) = 0,

that is, n − (c + 1)(c + 2)/2 = 0, so c_{1,2} = (−3 ± √(8n + 1))/2. Let c(n) = ⌈(−3 + √(8n + 1))/2⌉, the ceiling of the positive root of the equation. There are at least n codewords of length c(n), so the expected length is at most

L(MC) = Σ_{x∈𝒳} p(x) c(n) = c(n) Σ_{x∈𝒳} p(x) = c(n).
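The closed form for c(n) translates directly into code; a minimal sketch (the helper name c3 is ours):

```python
import math

def c3(n):
    """Cardinality of the multisets in the uniform ternary encoding of n
    symbols: the ceiling of the positive root of n - (c+1)(c+2)/2 = 0,
    i.e. of equation (16)."""
    return math.ceil((-3 + math.sqrt(8 * n + 1)) / 2)
```

Here c3(n) is the smallest cardinality c with ((3 c)) = (c+1)(c+2)/2 ≥ n; for instance c3(10) = 3, since there are exactly ((3 3)) = 10 ternary multisets of cardinality 3.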
[Plot: L(MC) in base 3 compared with H(X) = log(n) and H(X) + 1, for n (the number of symbols) from 0 to 100.]
Figure 5. Expected Length for Strings vs. Multisets

The remaining question is whether a lower expected length can be obtained.

Theorem 4.22. For base 3, the optimal submultiset-free code contains codewords with lengths
l*(xi) = c(n) − 1, for 1 ≤ i ≤ z3(n),
l*(xi) = c(n), for z3(n) < i ≤ n,
where
z3(n) = 0, if ∃k ≥ 1 s.t. n = ((3 k)),
z3(n) = c(n)(c(n) + 3)/2 − n, otherwise.

The expected length, when there is no k ≥ 1 s.t. n = ((3 k)), is
L(MC*) = Σ_{i=1}^{z3(n)} (1/n)(c(n) − 1) + Σ_{i=z3(n)+1}^{n} (1/n) c(n) = c(n) − z3(n)/n
= c(n) − (c(n)(c(n) + 3)/2 − n)/n = c(n) + 1 − c(n)(c(n) + 3)/(2n).

Therefore,
L(MC*) = c(n), if ∃k ≥ 1 s.t. n = ((3 k)),
L(MC*) = (2n + (2n − 3)c(n) − c²(n))/(2n), otherwise.

In Figure 5 we present a plot of L(MC*) and L(C*). Note that L(C*) is not represented by itself but by its bounds: H(X) ≤ L(C*) < H(X) + 1 (Theorem 5.4.1 from [31]), where H is the entropy. For uniform X, H(X) = log(n).
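The case split of Theorem 4.22 can be checked numerically by computing L(MC*) directly from the codeword lengths; a sketch (names are ours):

```python
import math

def multichoose(d, k):
    """Multiset coefficient ((d k)) = C(d+k-1, k)."""
    return math.comb(d + k - 1, k)

def c3(n):
    """Ceiling of the positive root of equation (16)."""
    return math.ceil((-3 + math.sqrt(8 * n + 1)) / 2)

def expected_length_mc3(n):
    """L(MC*) for a uniform n-symbol source over a ternary multiset channel,
    following the case split of Theorem 4.22."""
    c = c3(n)
    if any(multichoose(3, k) == n for k in range(1, c + 1)):
        return float(c)               # all codewords have cardinality c(n)
    z3 = c * (c + 3) // 2 - n         # codewords of cardinality c(n) - 1
    return (z3 * (c - 1) + (n - z3) * c) / n
```

For n = 4 this gives 1.75 (one codeword of cardinality 1 and three of cardinality 2), while for n = 3 and n = 10, both of the form ((3 k)), all codewords share the same cardinality.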
Note that the expected length of the optimal submultiset code is greater than the expected length of the corresponding Huffman code.

Results for the general case (arbitrary D). Given a discrete uniform random variable X with range 𝒳 (let n = |𝒳|) and a noiseless D-ary multiset channel accepting symbols from D = {0, 1, ..., D−1}, we present the following results:

Lemma 4.23. For a discrete uniform random variable X, there is no codeword in the optimal submultiset-free code MC* with length greater than the ceiling of the positive solution of the equation:

(17)    n − ((D c)) = 0.
Proof. Let ⌈c′⌉ = c(n), where c′ is the positive solution of equation (17); we proved in [BCI5] that c′ is unique. Note that ((D c(n))) ≥ n. In the worst case we have equality, and therefore MC* ⊆ D^{c(n)}, the set of all multisets of length c(n) in base D.

Lemma 4.24. Given a discrete uniform random variable X, the length of at least one codeword in the optimal submultiset-free code is the ceiling of the positive solution of equation (17).

Proof. Let c′ be the positive solution of equation (17); by [BCI5], c′ is unique. Suppose that l*(xi) < ⌈c′⌉ for all xi ∈ 𝒳; then l*(xi) ≤ ⌈c′⌉ − 1 = ⌈c′ − 1⌉ for all xi ∈ 𝒳, hence |𝒳| ≤ ((D ⌈c′−1⌉)) < ((D c′)) = n, so n < n, a contradiction. Therefore there exists xi ∈ 𝒳 with l*(xi) ≥ ⌈c′⌉, and together with Lemma 4.23 we obtain the proof of this lemma.

Lemma 4.25. For every finite multiset m_k of cardinality k over a finite alphabet D there exist supermultisets m_i ⊃ m_k for every cardinality i > k. Formally, ∀m_k ∈ D^k, ∀i > k, ∃S = {m_i | m_i ∈ D^i, m_k ⊂ m_i}, with |S| = ((D i−k)).

Proof. Every multiset m_i that includes m_k can be constructed as the union of m_k with a multiset of cardinality i − k. Therefore the number of multisets m_i ⊃ m_k is the number of multisets of cardinality i − k, namely ((D i−k)).

Lemma 4.26. In any base D > 1, for a uniform random variable X, if |𝒳| = ((D k)) with k ≥ 1, then z_D = 0.
Proof. Let n = |𝒳| = ((D k)) with k ≥ 1. We know that |MC(𝒳)| ≥ n. From equation (17) we obtain that we can represent the n symbols with multisets of cardinality k, therefore c′ = k.

Suppose that z_D > 0. This means that there is at least one codeword with cardinality less than k. Without loss of generality, assume that this codeword has cardinality k − 1. From Lemma 4.25, in order to keep the code submultiset-free, ((D 1)) = D codewords of cardinality k are no longer usable. Since D > 1, if we choose a single codeword with cardinality k − 1, the number of available codewords is |MC(𝒳)| = ((D k)) − D + 1 = n − D + 1 < n, a contradiction.

Corollary 4.27. For a discrete uniform random variable X all the codewords of a binary multicode have the same length, i.e. z_2 = 0.

Proof. For every such X there exists k ∈ ℕ⁺ with |𝒳| = k + 1 = ((2 k)), and the corollary follows from Lemma 4.26.

Lemma 4.28. Let c′ be the positive solution of equation (17). For a discrete uniform random variable X with range 𝒳, n = |𝒳|, in the optimal submultiset-free code there is no xi ∈ 𝒳 with l*(xi) < ⌈c′⌉ − 1.

Let us consider the Kruskal-Macaulay function λ_t N described in [62], and c(n) the ceiling of the positive solution of equation (17). We proved that the optimal submultiset codes contain at least one codeword with length c(n) (Lemma 4.24), no codewords with length less than c(n) − 1 (Lemma 4.28), and no codewords with length c(n) − 1 in the particular case when n = ((D k)) (Lemma 4.26). These facts suggest the following result:

Theorem 4.29. For a uniform random variable X (with |𝒳| = n) the optimal submultiset-free code contains codewords with lengths
l*(xi) = c(n) − 1, for 1 ≤ i ≤ z_D(n),
l*(xi) = c(n), for z_D(n) < i ≤ n,
where z_D(n) = λ_{D−2}( ((D c(n))) − n ).

Based on the previous theorem, we describe the following algorithm for generating optimal submultiset codes for uniform variables. The idea of the algorithm is that, considering the uniform variable X with range of cardinality n, first we compute c(n); all the codewords will have length either c(n) − 1 or c(n).
Algorithm 1 MUcoding(n, D)
Require: n, the number of elements in the range 𝒳 of a uniform random variable X, and D, the encoding base.
Ensure: the optimal submultiset-free code MC*(x), where x ∈ 𝒳.
1: c(n) ⇐ the ceiling of the positive solution of equation (17)
2: z_D(n) ⇐ λ_{D−2}( ((D c(n))) − n ), where λ_t N is the Kruskal-Macaulay function
3: aux ⇐ Decode(0^{c(n)−1})
4: for i = 1 to z_D(n) do
5:   MC*(x_i) ⇐ Encode(aux + i − 1, D)
6: end for
7: aux ⇐ Decode((D−1)^{c(n)})
8: for i = n downto z_D(n) + 1 do
9:   MC*(x_i) ⇐ Encode(aux − (n − i), D)
10: end for
Then compute z_D(n), the number of codewords of length c(n) − 1, as given by Theorem 4.29. The following table presents some values of λ_t N.

N      | 0  0  1  2  3  4  5  6  7  8  9  10  11
λ1 N | 0  0  1  3  6  10  15  21  28  36  45  55
λ2 N | 0  0  0  1  1  2  4  4  5  7  10  10
λ3 N | 0  0  0  0  1  1  1  2  2  3  5  5
λ4 N | 0  0  0  0  0  1  1  1  1  2  2  2
λ5 N | 0  0  0  0  0  0  1  1  1  1  1  2
Table 1. Kruskal-Macaulay Function Values
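The values in Table 1 can be reproduced from the t-binomial (Macaulay) representation of N; the sketch below reflects our reading of the definition in [62] (function names are ours) and matches every entry of the table:

```python
import math

def macaulay_rep(n, t):
    """Greedy t-binomial (Macaulay) representation of n:
    n = C(a_t, t) + C(a_{t-1}, t-1) + ... with a_t > a_{t-1} > ..."""
    rep = []
    while n > 0 and t > 0:
        a = t
        while math.comb(a + 1, t) <= n:
            a += 1
        rep.append((a, t))
        n -= math.comb(a, t)
        t -= 1
    return rep

def kruskal_macaulay(t, n):
    """lambda_t(N): shift every binomial of the representation up by one,
    C(a_i, i) -> C(a_i, i+1), and sum."""
    return sum(math.comb(a, i + 1) for a, i in macaulay_rep(n, t))
```

For instance, 5 = C(3,2) + C(2,1) gives λ2(5) = C(3,3) + C(2,2) = 2, as in the table.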
Then we select as codewords the first z_D(n) multisets of length c(n) − 1 and the last n − z_D(n) multisets of length c(n), in lexicographical order. The subroutines Encode() and Decode() were published in [BCI5].

2.5. Related Work. The problem of source coding using multisets was also treated in [103]. We introduced the notion of submultiset-free code, or submultiset code for short, similar to the notion of prefix-free code for strings but used to refer to instantaneous multiset codes. We described an algorithm to construct an optimal submultiset code for uniform (equiprobable) random variables. We propose that the notion of instantaneous code should no longer be regarded as identical to the notion of prefix code; instead, prefix codes should be regarded as instantaneous codes for strings, and submultiset codes as instantaneous codes for multisets.

Directly related future work is to describe communication protocols for orderless channels. One idea is that communication be done on two time-scales: one being the shorter interval that is usually required for a multiset to travel through the channel, and the other one a larger interval that
guarantees that all injected objects have passed through and the channel is ready for sending another multiset. Some other future developments include: an algorithm for generating optimal submultiset codes for arbitrary discrete random variables, a comparative analysis with Huffman and Fano codes, and a study of the possibility of using multisets for channel coding, noisy channels, and error-correcting codes.

3. Number Encodings and Arithmetics over Multisets

There is a connection between the information theory over multisets and the theory of numeral systems expressed by multisets. The study of number encodings using multisets can be seen as a study of a class of purely non-positional numeral systems. Here we revise some previously defined non-positional number encodings using multisets and their associated arithmetic operations, describing a general encoding/decoding algorithm that can map natural numbers to their multiset representations, and vice versa. We present the templates for the most compact encodings in base b for the successor and predecessor operations.

P systems are the abstract machines of membrane computing. For each abstract machine, the theory of programming introduces and studies various paradigms of computation. For instance, Turing machines and register machines are mainly related to imperative programming, while λ-calculus is related to functional programming. Looking at membrane systems from the point of view of programming theory, we intend to provide useful results for future definitions and implementations of P system-based programming languages, that is, programming languages that generate P systems as an executable form. The authors of such languages will most probably have to face the problem of number encoding using multisets, since the multiset is the support structure of P systems.
We attempt to show that the problem of number encoding using multisets is an interesting and complex one, with many possible approaches for various purposes. We outline a few of these approaches, and detail the most compact encoding using one membrane, also comparing it to the most compact encoding using strings. Such a comparison offers some hints about information encoding in general, specifically how to most compactly encode information over structures that have an underlying order (strings) or just multiplicity (multisets). Our approach to encoding numbers does not attempt to superimpose the string structure over the multiset, but uses only the already present quality of multiset elements (multiplicity) to encode numbers and, by extension, information. Thus, this approach might be easier to use in related biochemical experiments. Another advantage is that with this approach it is possible to represent arbitrarily large numbers without membrane creation, division, or dissolution, and without infinitely many membranes or object types. One disadvantage is that the arithmetic operations have
slightly higher complexity. We have implemented the arithmetic operations [BCI0], and each example is tested with our web-based simulator available at http://psystems.ieat.ro/.

3.1. Motivation. The development of P systems needs to be supported by efficient means of information encoding over multisets. As a first step, we consider the case of number encodings over multisets, similarly to how Church numerals are constructed in λ-calculus. Arguably, on any string-based abstract machine, numbers are most easily encoded by assigning each digit to a position in the string, that is, using positional number systems in an appropriate base, dependent on the alphabet of the machine. By contrast, on multiset-based machines (as P systems are), positioning, while perfectly implementable, is not free, meaning that we pay a certain price for it. So far, the most widely used number encoding over multisets is the natural encoding: the natural number n is encoded using n objects of the same kind, e.g. by a^n. We envision these main types of number encodings over multisets (without string objects):

(1) Positional encodings. These are obtained by superimposing a string structure over the multiset in various ways and representing the number using a positional number system.
• The digits of the number in a certain base can be naturally encoded in successive membranes; the next digit is encoded in an inner (or outer) membrane. The shortcoming of this type of encoding is that, to represent arbitrarily large natural numbers, we need a membrane structure of unbounded depth.
• The digits of the number are encoded using different objects for each digit; e.g., if the number is n = 3·10^3 + 4·10^2 + 2·10 + 5, we could represent it as a^3 b^4 c^2 d^5. By convention, the a objects represent the first digit, the b objects the second one, and so on.
We are unaware of prior work in this area; however, this type of encoding is easily obtained from the previous one by flattening the membrane structure in a natural way. As a consequence, whereas in the above type we need infinitely many membranes to represent arbitrary natural numbers, in this case we need at most countably many object types.

(2) Non-positional encodings. In this case, the idea is not to superimpose a string structure over the multiset, so we do not have positioning. Instead, we rely only on the multiplicity of objects, a defining, native characteristic of the multiset. Below we consider several such number encodings. We also show that the natural encoding already mentioned is actually the first member of a family of such encodings. The main disadvantage of this approach is a greater encoding length, but this can be much
improved over the natural encoding in other variants of this type. However, the encoding length and the complexity of the arithmetic operations over numbers encoded in this manner are generally greater than in positional encodings. The significant advantage of non-positional encodings is the fact that they use a single membrane and finitely many object types. This can be very desirable for practical reasons.

(3) Hybrid encodings. Hybrid encodings derived from the above types can be considered, and may be practically useful, but are not investigated here.

It is important to note that there is a qualitative difference between positional and non-positional encodings: in the case of positional encodings there is at least one additional unbounded characteristic of the membrane system with respect to the non-positional ones. Using non-positional encodings, we can represent arbitrarily large numbers with a finite number of object types, given that the multiset elements have unbounded multiplicity; this difference may also be of great practical interest.
3.2. Most Compact Encoding Using One Membrane. The natural encoding is easy to understand and work with, but it has the disadvantage that for very large numbers the P system membranes will contain a very large number of objects, which is undesirable for practical reasons. First we analyze the most compact encoding (MCE) using two object types (the binary case MCE2), and then the ternary case MCE3.
3.3. Cantor Encoding, or Most Compact Encoding Base 2 (CE2 or MCE2). We call the binary case of the most compact encoding using one membrane the Cantor encoding, because it is the inverse of the Cantor pairing function. The Cantor pairing function is the bijection π : ℕ × ℕ → ℕ defined by:

π(k1, k2) = (k1 + k2)(k1 + k2 + 1)/2 + k2.

There is also a variant called the Hopcroft-Ullman function, and together they are the only quadratic functions with real coefficients that are bijections from ℕ × ℕ to ℕ. These functions were used (by Cantor) in the proof of |ℚ| = |ℕ| = ℵ0. For more details see [104]. Here we present the original introduction of this encoding, though, knowing that this encoding is the inverse of the Cantor pairing function, a much easier formulation would be possible.
Decimal  MCE1*     MCE2     MCE3  MCE4
0        λ = a^0   λ        λ     λ
1        a^1       0        0     0
2        a^2       1        1     1
3        a^3       00       2     2
4        a^4       01       00    3
5        a^5       11       01    00
6        a^6       000      02    01
7        a^7       001      11    02
...      ...       ...      ...   ...
19       a^19      01111    222   011
20       a^20      11111    0000  012
21       a^21      000000   0001  013
22       a^22      000001   0002  022
23       a^23      000011   0011  023
24       a^24      000111   0012  033
25       a^25      001111   0022  111
26       a^26      011111   0111  112
27       a^27      111111   0112  113
28       a^28      0000000  0122  122
29       a^29      0000001  0222  123
30       a^30      0000011  1111  133
Table 2. Most Compact Encodings

To minimize the number of objects in the representation we encode natural numbers in the way depicted in Table 2. For base 2 we use two object types to illustrate this encoding, thus obtaining a binary encoding over multisets (an unordered binary encoding). We derive the encoding and decoding procedures as follows.

Definition 4.30. The function PR : P(ℂ) → ℝ⁺ ∪ {undefined} is given by
PR(M) = x, if x ∈ M ∩ ℝ⁺ and |M ∩ ℝ⁺| = 1,
PR(M) = undefined, otherwise;
that is, PR(M) is the (only) positive real in M.

To encode the natural number n, we first have to determine the number m of objects needed to represent it, and then the object types. The length of the representation in base 2 is obtained by solving the equation:
(18)    x(x + 1)/2 − n = 0,

and m = [PR({x1, x2})] = [(−1 + √(8n + 1))/2]. From (18) we have k = n − m(m + 1)/2 objects of type 1, the rest being of type 0. We decode the number encoded using m objects, k of which are of type 1, as n = m(m + 1)/2 + k.
3.4. Most Compact Encoding Base b. We denote by N(b, m) the number of natural numbers that can be encoded in base b with m objects. For our representations, the notion of m-combinations of a multiset is useful in determining this number; the base is denoted by b:

(19)    N(b, m) = ((b m)) = C(b−1+m, m) = C(b−1+m, b−1),

which is also the number of m-combinations of a multiset with b types. The encoding and decoding algorithms for a given base b are described in [BCI1] by the equation:
(20)    n − Σ_{i=0}^{x−1} N(b, i) = 0  ⇔  n − (Π_{i=0}^{b−1} (x + i))/b! = 0.
The integer part of the positive real root of equation (20) represents the number m = [PR({x_i | 1 ≤ i ≤ b})], i.e. the number of objects needed to represent the natural number n. Since the function f(x) = n − (Π_{i=0}^{b−1} (x + i))/b! is strictly decreasing for x ∈ ℝ⁺, it always has a single positive real solution, so m is uniquely determined for each n as the integer part of this solution. To determine the number of objects of each type in the representation (i.e., the multiplicity of each multiset element that occurs in it) we must solve the following system of equations, knowing n0 = n (the number to be encoded):

(21)
n0 − (Π_{i=0}^{b−1} (x + i))/b! = 0 ⇒ m0 = [PR({x_i | 1 ≤ i ≤ b})]
n1 = n0 − Σ_{i=0}^{m0−1} N(b, i)
n1 − (Π_{i=0}^{b−2} (x + i))/(b−1)! = 0 ⇒ m1 = [PR({x_i | 1 ≤ i ≤ b−1})]
n2 = n1 − Σ_{i=0}^{m1−1} N(b−1, i)
...
n_{b−1} − (Π_{i=0}^{0} (x + i))/1! = 0 ⇒ m_{b−1} = [PR({x1})]
n_b = n_{b−1} − Σ_{i=0}^{m_{b−1}−1} N(1, i)
where we consider {0, 1, ..., b−1} the digits of the base; the multiset encoding of n will be 0^{m0−m1} 1^{m1−m2} ... (b−2)^{m_{b−2}−m_{b−1}} (b−1)^{m_{b−1}}, where the m_i, 0 ≤ i ≤ b−1, are obtained from (21).

Remark 4.31. From (21) we have that n_{b−1} − (Π_{i=0}^{0} (x + i))/1! = 0 ⇔ n_{b−1} − x = 0 ⇒ m_{b−1} = n_{b−1} and n_b = 0.

Algorithm 2 Encode(n, b)
Require: Two natural numbers n and b ≥ 1.
Ensure: 0^{d0} 1^{d1} ... i^{di} ... (b−1)^{d_{b−1}}, the multiset MCEb encoding of n.
1: k ⇐ b
2: repeat
3:   Solve n − (Π_{i=0}^{k−1} (x + i))/k! = 0; X = {x_j | j = 1, ..., k} ⇐ roots of the equation
4:   m_next ⇐ [PR(X)]
5:   d_{b−k} ⇐ m − m_next
6:   m ⇐ m_next
7:   n ⇐ n − Σ_{i=0}^{m−1} N(k, i)
8:   k ⇐ k − 1
9: until n = 0
To decode a number encoded in base b with m objects as 0^{d0} 1^{d1} ... (b−1)^{d_{b−1}}, we sum the equations
n1 = n0 − Σ_{i=0}^{m0−1} N(b, i),
n2 = n1 − Σ_{i=0}^{m1−1} N(b−1, i),
...
n_b = n_{b−1} − Σ_{i=0}^{m_{b−1}−1} N(1, i),
and, since n_b = 0, we get:
n = n0 = Σ_{i=0}^{m0−1} N(b, i) + Σ_{i=0}^{m1−1} N(b−1, i) + ... + Σ_{i=0}^{m_{b−1}−1} N(1, i)
⇔ n = N(b+1, m0−1) + N(b, m1−1) + ... + N(2, m_{b−1}−1)
⇔ n = Σ_{k=2}^{b+1} N(k, m_{b+1−k} − 1),
where m_{b−1} = d_{b−1} and m_j = m_{j+1} + d_j for 0 ≤ j ≤ b−2.

3.5. Membrane Systems for Most Compact Encodings. We present the P systems that implement the arithmetic operations on numbers encoded using the unary and binary cases of the most compact encoding. All P systems presented below have corresponding XML input files used in the framework of WebPS, a Web-based P system simulator [BCI0]. The notations we use are standard in membrane computing [90], so we do not recall any definitions.
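The decoding sum above can be sketched as follows (names are ours; N(k, −1) is taken to be 0, matching the empty sum):

```python
import math

def N(b, m):
    """Number of naturals representable with m objects over b types: ((b m))."""
    return math.comb(b - 1 + m, m) if m >= 0 else 0

def decode_mceb(word, b):
    """Decode a MCE_b multiset, given as a sorted digit string over 0..b-1."""
    d = [word.count(str(i)) for i in range(b)]   # multiplicities d_0..d_{b-1}
    m = [0] * (b + 1)
    for j in range(b - 1, -1, -1):               # m_{b-1} = d_{b-1}, m_j = m_{j+1} + d_j
        m[j] = m[j + 1] + d[j]
    return sum(N(k, m[b + 1 - k] - 1) for k in range(2, b + 2))
```

The values agree with Table 2: "01111", "222" and "011" all decode to 19 in bases 2, 3 and 4 respectively.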
Algorithm 3 Decode(0^{d0} 1^{d1} ... i^{di} ... (b−1)^{d_{b−1}})
Require: d0, d1, ..., d_{b−1}, the multiplicities of the multiset MCEb elements.
Ensure: The natural number n.
1: m_b ⇐ 0
2: for k = b − 1 downto 0 do
3:   m_k ⇐ m_{k+1} + d_k
4: end for
5: n ⇐ Σ_{k=2}^{b+1} N(k, m_{b+1−k} − 1)
Successor MCE2 – Figure 6. Time complexity: O(1).
P system evolution and complexity proof: The successor of a number in this encoding is computed in the following manner: either we have an object 0 and the rule 0s → 1 transforms one 0 into a 1 (1 time unit), or the number is encoded using only objects 1 and the rule 1s → 00t0 transforms one object 1 into two objects 0 (increasing the length of the encoding) and generates an object t0 (1 time unit), which promotes the rule 1 → 0 |t0. This rule transforms all the other objects 1 into objects 0 (1 time unit, because of the maximally parallel rewriting, MPR). Consequently, the time complexity of the successor is O(1), because the evolution ends in 1 or 2 time units.
Figure 6. Successor in MCE2

Predecessor MCE2 – Figure 7. Time complexity: O(1).
P system evolution and complexity proof: The predecessor of a number is computed by rewriting a 1 into a 0 by the rule 1s → 0 whenever we have objects 1 (1 time unit); otherwise we consume one 0 and produce an object t0 by the rule 0s → t0 (1 time unit), and transform all the other objects 0 into 1 by the rule 0 → 1 |t0 promoted by t0 (1 time unit, because of the MPR). Consequently, the time complexity of the predecessor is O(1), because the evolution ends in 1 or 2 time units.
Figure 7. Predecessor in MCE2
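The effect of the successor and predecessor rules of Figures 6 and 7 can be simulated directly on the sorted-string representation of the multiset; a sketch (function names are ours; the behaviour on the empty encoding λ is our convention):

```python
def succ(w):
    """MCE2 successor: rule 0s -> 1, or else 1s -> 00 t0 and 1 -> 0 |t0."""
    zeros, ones = w.count("0"), w.count("1")
    if zeros > 0:
        zeros, ones = zeros - 1, ones + 1   # one 0 becomes a 1
    else:
        zeros, ones = ones + 1, 0           # one 1 -> 00, remaining 1s -> 0
    return "0" * zeros + "1" * ones

def pred(w):
    """MCE2 predecessor: rule 1s -> 0, or else 0s -> t0 and 0 -> 1 |t0."""
    zeros, ones = w.count("0"), w.count("1")
    if ones > 0:
        zeros, ones = zeros + 1, ones - 1   # one 1 becomes a 0
    else:
        zeros, ones = 0, zeros - 1          # one 0 is consumed, rest -> 1
    return "0" * zeros + "1" * ones
```

Starting from λ and applying succ repeatedly reproduces the MCE2 column of Table 2: λ, 0, 1, 00, 01, 11, 000, 001, ...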
Addition in MCE2 – Figure 8. Time complexity: O(n).
P system evolution: We implement addition by coupling the predecessor and the successor through a "communication token". We use the idea that two natural numbers are added by incrementing one number while decrementing the other until we cannot decrement anymore. The evolution is started by the predecessor computation in the outer membrane, which injects a communication token s into the inner membrane. For each predecessor cycle (except the first one) the inner membrane computes the successor, passing back the token s. Since we want to stop the computation when the predecessor reaches 0, we omit computing the successor for one predecessor cycle: the first token s is eaten up by the single object p present in the inner membrane.
Figure 8. Addition in MCE2

Complexity proof: While computing n + m, the number n is encoded in membrane 0 and the number m in membrane 1. From the evolution we note that one number is incremented while the other is decremented until it cannot be decremented anymore: we count n decrements (n predecessor evolutions, each with O(1) time complexity) and the same number of increments (n successor evolutions, each with O(1) time complexity). Consequently, the addition ends in 2n steps, with time complexity O(n).

Multiplication MCE2 – Figure 9. Time complexity: O(n1·n2).
P system evolution: We implement multiplication in a similar manner to addition, coupling a predecessor with an adder. The idea is to provide the first number to a predecessor, and perform the addition iteratively until the predecessor reaches 0. The predecessor is computed in membrane 0, and in membranes 1, bk, and 2 we have a modified adder. The evolution is started by the predecessor working over the first number, in the outer membrane 0. The predecessor activates the adder by passing a w that acts as a communication token. The adder is modified to use an extra backup membrane, named bk, which always contains the second number (the name suggests that it contains a backup of the second number). When the adder is triggered by the predecessor, it signals the backup membrane bk, which supplies a fresh copy of the second number to the adder
(bk fills membrane 1 with the encoding of the second number) and a new addition iteration is performed. At the end of the iteration, the adder sends out a token s to the predecessor in membrane 0. The procedure is repeated until the predecessor reaches 0.
Figure 9. Multiplier in MCE2

Complexity proof: If we compute n1 · n2, the number n1 is encoded in the predecessor and the number n2 in the adder. The multiplier system evolves by performing n1 times the addition of n2, with the result memorized in the output membrane (this result starts at 0). A predecessor (O(1)) decreases n1 until it reaches 0, and for each decrement an addition (O(n2)) is performed. Consequently, the multiplication ends with time complexity n1 · O(1) · O(n2) = O(n1 · n2). If n1 = n2 = n, then O(n1 · n2) = O(n²).
Figure 10. Multiple-Iterations Successor

Multiple-Iterations Successor MCE2 – Figure 10. Time complexity: O(p/m) = O(p/√n).
P system evolution and complexity proof: The multiple-iterations successor performs p successor iterations on the number n; the number p of iterations is given by the number of objects s. In this encoding the multiple-iterations successor is computed in the following manner. Considering the order of priority, the rule 0s → 1 is applied; it
consumes as many s as possible, and objects 0 are transformed into objects 1 (at most m, the length of n); 1 time unit (because of MPR). Then, if objects s still exist, the rule su → 0t generates a single 0 (1 time unit) and a t which promotes the rule 1 → 0 |t, transforming all objects 1 into objects 0 (1 time unit). Together with the 0 generated by the rule su → 0t, the number of objects in the encoding is increased. The last rule, t → u, converts the object t into a u, which allows the second rule to consume a single s. If the objects s are not entirely consumed, this process is repeated. The rule t → u is not important for the time complexity, because it can be applied at the same time as the previous one.

In the first 3 time units m + 1 objects s are consumed, in the next 3 time units m + 2 objects s are consumed, and so on (m is increasing), until all the objects s are consumed. We compute all p iterations in 3k time units, where k is given by p = Σ_{i=1}^{k} (m + i) ≥ Σ_{i=1}^{k} (m + 1) = k(m + 1), hence k ≤ p/(m + 1). Consequently, the time complexity is O(3k) = O(3p/(m + 1)) = O(p/m). We obtained O(p/m) for computing p successor iterations with the multiple-iterations successor, better than the simple successor, which needs p · O(1) = O(p) to compute p successor iterations.

Multiple-Iterations Predecessor MCE2 – Figure 11. Time complexity: O(m) = O((−1 + √(8n + 1))/2) = O(√n).
P system evolution: The multiple-iterations predecessor performs p predecessor iterations on the number n; the number of iterations is the number of objects s. It is computed in the following manner. Considering the order of priority, the rule 1s → 0 is applied, consuming as many s as possible, and objects 1 are transformed into objects 0 (at most m objects 1); 1 time unit.
If we still have objects s, the rule 0su → t removes a single 0 (the length of n is decreasing, m = m − 1; 1 time unit) and generates one t, which promotes the rule 0 → 1 |t, transforming all objects 0 into objects 1; 1 time unit. The number of objects in the encoding is decreased by the rule 0su → t. The last rule, t → u (not important for the time complexity), converts the object t into a u, which allows the second rule to consume a single s. If the objects s are not entirely consumed, this process is repeated.
Figure 11. Multiple-Iterations Predecessor
In the first 3 time units m + 1 objects s are consumed, in the next 3 time units m objects s are consumed, and so on (m is decreasing), until all the objects s are consumed. We compute all p iterations in 3k time units, where k is given by p = Σ_{i=1}^{k} (m − i). If we consider, in the worst case, that m decreases until it reaches 0 (the decoder), we obtain k = m and p = Σ_{i=0}^{m} (m − i). Consequently, the time complexity is O(3k) = O(3m) =
Figure 12. Decoder M CE2 Complexity proof: Time complexity of Decoder M CE2 is the same as for the Multiple-iteration predecessor M CE2
Figure 13. Optimized Adder Optimized Adder √ M CE2 – Figure 13 Time complexity: O( n). P system evolution: The optimized adder contains in membrane 0 a multiple-iteration predecessor (Decoder), and in membrane 1 a multiple-iterations successor. Each membrane contains a term of the addition. As opposed to the simple adder where the predecessor and the successor perform a synchronization after each iteration, in this optimized adder the predecessor compute in one step multiple iterations, and sends multiple objects s to the successor. The successor performs its iterations in
an asynchronous manner (without any response to the predecessor). The evolution stops when the predecessor stops.

Complexity proof: According to the P system evolution, the optimized adder contains a multiple-iterations predecessor in one membrane and a multiple-iterations successor in the other. Because the successor performs its iterations asynchronously, without any response to the predecessor, the time complexity is given by the worse of the time complexities of the multiple-iterations predecessor (O(√n)) and of the multiple-iterations successor (O(p/√n)). The worst case is for p of order n, which gives O(p/√n) = O(n/√n) = O(√n). Consequently, the time complexity of the optimized adder is O(√n).
3.6. Most Compact Encodings in Base b.

3.6.1. Successor in MCEb. We present a template for the successor P system in MCEb:

Π1 = (V, µ, w0, (R0, ρ0), 0),
V = {0, 1, 2, ..., b−1, s},
µ = [0 ]0,
w0 = 0^{d0} 1^{d1} 2^{d2} ... (b−1)^{d(b−1)} s,
R0 = { r_{X+2b} : Xs → pX,
       r_{2b−1} : s → 0 s0,
       r_{X+b} : (b−1) → (X+1) |pX,
       r_{b−1} : (b−1) → 0 |s0,
       r_X : pX → (X+1),
       s0 → λ, with 0 ≤ X ≤ b−2 },
ρ0 = { r_{k+2b} > r_{k+2b−1} | 0 ≤ k ≤ b−2 }.

The successor of a number encoded in base b is computed in the following manner:
• If there is at least one object X (where 0 ≤ X ≤ b−2), then, based on ρ0, the rule Xs → pX rewrites the maximal X into an auxiliary object pX. All objects b−1 in the encoding are rewritten into the object (X+1) by the rule (b−1) → (X+1) |pX, and the auxiliary object pX is transformed into one (X+1) by the rule pX → (X+1);
• otherwise, the rule s → 0 s0 generates one 0, and if there are objects (b−1), all of them are rewritten into 0 by the rule (b−1) → 0 |s0.

3.6.2. Predecessor in MCEb. We present the template for the predecessor P system in MCEb:

Π2 = (V, µ, w0, (R0, ρ0), 0),
V = {0, 1, 2, ..., b−1, s},
µ = [0 ]0,
w0 = 0^{d0} 1^{d1} 2^{d2} ... (b−1)^{d(b−1)} s,
R0 = { r_{X+2b} : Xs → pX,
       r_{2b} : 0s → s0,
       r_{2b−1} : 0 → (b−1) |s0,
       r_{X+b−1} : X → (b−1) |pX,
       r_X : pX → (X−1),
       s0 → λ, with 1 ≤ X ≤ b−1 },
ρ0 = { r_{k+2b} > r_{k+2b−1} | 1 ≤ k ≤ b−1 }.

The predecessor of a number encoded in base b is computed in the following manner:
• If there is at least one object X (where 1 ≤ X ≤ b−1), then, based on ρ0, the rule Xs → pX rewrites the maximal X into an auxiliary object pX. All other objects X are rewritten into the object (b−1) by the rule X → (b−1) |pX, and the auxiliary object pX is transformed into one (X−1) by the rule pX → (X−1);
• otherwise, if there are objects 0, the rule 0s → s0 erases one 0, and all of the other 0 objects are rewritten into (b−1) by the rule 0 → (b−1) |s0.

The most compact encodings over multisets represent n using b·n^{1/b} objects, where b is the base of the encoding. When we consider strings instead of multisets, the positioning can provide additional storage space, so the most compact encoding of n over strings has a length of log_b n. This fact provides some hints about information encoding in general, allowing us to compare the most compactly encoded information over structures such as sets, multisets, and strings of elements from a multiset (where position is relevant). The effect of considering position as relevant over the elements of a multiset is the reduction of the encoding length from b·n^{1/b} to log_b n (also detailed in [BCI1]). Still, the encodings over multisets are much closer to the computational models inspired by biology, and can help improve their computational efficiency.

Algorithms for non-positional base b encoding and decoding of natural numbers are presented. They are similar in form and idea to the base conversion algorithms for positional number systems. Also, the P system templates for the computation of the successor and predecessor of a natural number represented in non-positional base b are defined.
They can be instantiated to create a successor or predecessor P system for an arbitrary non-positional base.

4. Arithmetic Expressions in Membrane Systems

Using promoters and a biologically inspired mechanism of communication, namely the antiport rules, we show how to build larger systems by composing arithmetic modules. The way of composing membrane systems is inspired by developments in the field of asynchronous processor design. We consider the most general class of asynchronous systems, namely the delay-insensitive systems. We outline the assembly of delay-insensitive
complex systems from simple functional components. Larger systems can be easily assembled into delay-insensitive systems if we allow antiport rules and promoters. The resulting system is asynchronous and efficient.

Delay-insensitive systems are truly asynchronous systems. Since in practical electronic designs they incur a severe efficiency penalty, quasi-delay-insensitive systems are usually preferred. However, given the abstract nature of our systems, we consider the delay-insensitive compositional membrane systems, because they have the highest degree of asynchrony and, as a consequence, also the highest modularity. This means that some components may take a longer or a much shorter time to process an input (even the same input), and still the end result is the same. In other words, the system reacts consistently to the input. The power of delay-insensitive systems lies in the fact that once a component performs a certain operation correctly, it can be plugged into the system if the input and output are operated and interpreted in the required way. This allows the usage of a black-box development model in order to obtain a clean, modular design of the system. We consider the delay-insensitive membrane systems an important class of compositional asynchronous membrane systems.

A handshake mechanism is an exchange of signals between two devices when communication begins, in order to ensure the synchronization which precedes the information transfer. The delay-insensitive systems incur a significant overhead, because the system must ensure that no input enters a component until the possibly pending processing in the component is finished. This can be resolved by adding a handshake mechanism between successive components, to ensure that each component obeys the predefined sequence of states.
We use the antiport rule having the form (a, out; b, in), which means that an object a from the current membrane and an object b from the outer membrane can cross the delimiting membrane at the same time. A promoter is a special object which by its presence makes it possible to use a rule; e.g. a → b |p means that the rule a → b can be used in the presence of a promoter p. By using antiport rules we can realize the synchronization between successive components through the exchange of signal promoters which further facilitate the data and control transfer. In the following we present an algorithm implementing the data and control transfer between successive components.
4.1. Composition by Antiport Handshakes. Starting from membranes able to perform certain arithmetic operations, we can build a more complex system able to evaluate arithmetic expressions by aggregating these membranes, introducing each membrane inside the previous one. Membrane i computes the result of a function fi : Mi → fi(Mi), where Mi represents a multiset, and i = 0, ..., n−1. A compositional functional membrane system as described before is valid if Img(fi) ⊆ Dom(fi+1), for i = 0, ..., n−2. The resulting system in Figure 14 computes the result of fn−1 ◦ ... ◦ f0 applied to
the input represented by the multiset in membrane 0 (the input membrane), and the results are obtained in membrane n−1 (the output membrane).

Let us consider the membrane i, where i = 0, ..., n−1. It must follow the evolution steps given by the following algorithm.

The delay-insensitive algorithm:
(1) an antiport handshake ensures that membrane (i−1) has finished the transmission;
(2) membrane i performs its computation phase;
(3) an antiport handshake ensures that membrane (i+1) is ready to receive, and membrane i is ready to transmit;
(4) membrane i transmits its output as input to membrane (i+1);
(5) membrane i goes back to the ready state.
Figure 14. A Compositional Membrane System

If i = 0, then step 1 is missing; if i = n−1, then steps 3 and 4 are missing, because i is the last membrane on the path (the output membrane). When i = 1, ..., n−2, that is, we do not work with the first or the last membrane, the data and control transfer proceeds as described before. The order of the processes in the membranes is revealed by reading them from top to bottom. Following the algorithm, after membrane i gets its input from the previous one, it performs its partial computation and is ready to communicate with membrane i+1 as soon as this membrane is ready for communication. After the antiport handshake between i and i+1 is completed, i starts transmitting its output as input to i+1, and after completion it enters the ready state. Control is now transferred to membrane i+1, where the computation is now performed. Then a handshake with the next membrane is attempted. The communication pattern is the following:
• Syn_i Fin End → Syn_{i+1}, which fires only after the computation in membrane i is finished; it creates an object Syn_{i+1} which serves as a connection request to membrane i+1;
• antiport handshake: (Ready_{i+1}, out; Syn_{i+1}, in) in the inner membrane;
• data transfer, represented by the (possibly multi-step) evolution of the multiset DATA_i into the multiset DATA_i inside membrane i+1 and an object End in the current membrane: DATA_i ⇒* (DATA_i, in_{i+1}) End |Ready_{i+1}. By this notation we understand that each rule which facilitates the transition is promoted by the object Ready_{i+1}. We note that an object End must be generated only after all data transfers are done, marking the end of this phase;
• end of transmission is realized by Ready_{i+1} End → (Fin, in_{i+1}) Ready_i, a rule in the outer membrane which removes the promoter Ready_{i+1} and the object End (which signals the end of data transmission), and sends the signal-promoter Fin to the inner membrane as termination of the data transfer step. This rule also resets membrane i to the ready state.

The input membrane 0 has no antiport handshake, since the input is stored directly into it, leaving only the computation and transmission phases. The rule INPUT ⇒* DATA_0 |Fin means that the input multiset evolves, in 0 or more steps, to an intermediate result of the computation. This result is then sent to membrane 1 via the transmission rules. We note that a promoter Fin should be present in the membrane at the start of its functioning, being supplied at the end of the input insertion. The last membrane n−1 lacks a transmission phase.

Now we provide details on how to use such a compositional system. The user of the asynchronous protocol can design such an asynchronous chain of membranes by fulfilling the following requirements:
(1) The user must provide the goal-specific rule sets for computation and transmission. These rules must not use the control objects Syn, Ready, End, Fin, with the two exceptions described in requirement (3). The rules are also confined to use objects from, and create objects in, the embedding membrane.
(2) The user must ensure that the function computed by an asynchronous P system receives the proper input from the previous component, and produces the proper output for the next component.
(3) The user must ensure that the computation and transmission phases produce an End object upon termination.
Without the first requirement the system is not asynchronous, since the computation could be limited to a single step (or a fixed number of steps), while without the second one the transmission to the next component could be limited to a single step, so the delay-insensitive system becomes only a speed-insensitive system.
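Under these requirements, the composition behaves like an ordinary blocking pipeline: each component waits for its predecessor's transmission, computes, then hands its output to the successor. The following Python sketch mimics the delay-insensitive algorithm, with bounded queues playing the role of the antiport handshakes (all names such as `component` and `compose` are ours, not part of the chapter):

```python
from queue import Queue
from threading import Thread

def component(f, inbox, outbox):
    """One membrane of the chain: wait for the predecessor's transmission
    (step 1), compute (step 2), then hand the result to the successor
    (steps 3-4); with maxsize=1, put blocks while the successor's inbox
    is still full, modelling the handshake."""
    data = inbox.get()       # blocks until membrane i-1 finished transmitting
    outbox.put(f(data))      # transmission to membrane i+1

def compose(functions, value):
    """Chain the components; computes f_{n-1} o ... o f_0 applied to value."""
    links = [Queue(maxsize=1) for _ in range(len(functions) + 1)]
    threads = [Thread(target=component, args=(f, links[i], links[i + 1]))
               for i, f in enumerate(functions)]
    for t in threads:
        t.start()
    links[0].put(value)       # the input membrane 0 receives its multiset
    result = links[-1].get()  # collected in the output membrane n-1
    for t in threads:
        t.join()
    return result

# n * (m + p) as an adder component followed by a multiplier (here n = 4)
assert compose([lambda mp: mp[0] + mp[1], lambda y: 4 * y], (2, 3)) == 20
```

Because each `get` blocks for as long as the predecessor needs, replacing a component by a slower or faster one does not change the result, which is exactly the delay-insensitivity property discussed above.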
We have used the above asynchronous protocol to compose some previously studied arithmetic P systems into composite systems able to compute arithmetic expressions. Note that, using this method of composition, there are no restrictions on how a certain component performs its computation, as long as it produces the correct result and obeys the above rules for asynchronous coupling with the rest of the system. The component systems can be synchronous or asynchronous; they can be slower or faster, in the sense of the number of steps required to compute the result; they can even belong to different P system families. The whole system remains asynchronous in the sense that each component waits for its neighbour to finish the computation before passing another input to it.

4.2. Computing Arithmetic Expressions. As a case study we developed the membrane systems for computing the arithmetic expression n · (m + p). We have two different implementations based on two multiplication systems, with and without promoters. The two variants for multiplication have different time complexities, the one with promoters being faster.
(a) Multiply with promoters
(b) Multiply w/o promoters
Figure 15. Multiplication Systems

We build the compositional asynchronous system for computing n · (m + p) starting from its prefix (Polish) notation · n + m p. To compute this expression we must compose an addition system with a multiplication system. The results from the adder (the objects y representing the value of the sum m + p) are sent to the multiplier as input. After the computation phase, the transmission phase is started. We ensure that the computation in the inner membrane does not start until the transmission phase is finished. Once the input is received in the multiplier, the product n · y is computed in the computation phase, and the value of the expression n · (m + p) is obtained.
In Figure 16 the product n · y is computed with a multiplication system with promoters of complexity O(n). In Figure 17 this product is computed with a multiplication system without promoters of complexity O(n · m). The results of both systems are the same. The difference between the two systems is given by the computation rules of membrane 1. We can observe that if a component is replaced by another one which implements the same operation with a different complexity, the compositional system produces the same result, because it is a delay-insensitive system.
Figure 16. Using Multiplication with Promoters
Figure 17. Using Multiplication without Promoters

In [BCI3] it is proved that a compositional system constructed by using the delay-insensitive algorithm is also a delay-insensitive system.
5. Various Encodings of Multisets

It is useful to study multiset properties. Here we compare multisets with several ingredients used in natural computing (membrane and DNA computing), namely strings, lists, and vectors over positive integers. In this process some known algebraic operations are used, and we introduce the Hadamard product over multisets (and vectors). This product can be used to decide whether two multisets have disjoint supports. The interpretation of the Hadamard product is clarified when we introduce the Gödel encoding of multisets, leading to an unexpected result: "two multisets have disjoint alphabets if and only if their corresponding Gödel numbers are relatively prime". An additional reason to use this product is that the Hadamard product is perfectly described when we consider the product of diagonal matrices over natural numbers as an encoding for multisets. Diagonal matrices also lead us to think of integer quadratic forms. Since such matrices can be obtained under certain conditions in the theory of modules of finite type over principal ideal domains (like the ring of integers), we restrict our approach and do not consider multisets with negative multiplicities. However, the integer quadratic forms lead to the result that every natural number can be obtained as the weight (or length) of a squared multiset over a four-letter alphabet, and that a four-letter alphabet is the minimal one (in cardinality) with this property for all natural numbers.

5.1. On Multisets, Strings, Lists and Parikh Vectors. There are many papers dedicated to multisets which contain various viewpoints and interpretations. The purpose of this approach is to put together the main ingredients used in membrane computing, and to create the connections between them. Let Σ ≠ ∅ be an alphabet, usually finite. The free monoid Σ* is the set of all words (strings) over the alphabet Σ, together with the concatenation operation and the empty word denoted by ε.
Since to every letter a in the alphabet Σ we can associate the word of length 1, we obtain an embedding of the alphabet Σ = {a1, ..., an} into Σ*, denoted by ι : Σ → Σ*, ι(a) = a. The free monoid of words Σ* over the alphabet satisfies the so-called "universality property" which, briefly, can be described as follows:

Theorem 4.32. Every function f from the alphabet Σ to any monoid M can be uniquely extended to a morphism of monoids f* from Σ* to M, with f* ◦ ι = f.
This property allows us, in order to describe a morphism from Σ∗ to another monoid M , to define only a correspondence from Σ (which is finite)
to M . The elements of Σ∗ are written w = a1 a2 · · · an , where ai ∈ Σ. A language over Σ is, simply, any subset of Σ∗ . • We have to remark that, for a word w = a1 a2 · · · an , the order and the multiplicity of appearance are important. • Another remark is that Σ∗ can be viewed as the set of all finite lists on Σ, and the concatenation of words corresponds to the operation of “list concatenation” over the set of lists denoted by List(Σ). List concatenation is denoted by “++” as in the functional programming language Haskell. Σ∗ List(Σ) w = a1 a2 · · · an [a1 , a2 , · · · , an ] “·′′ ++ ε [] • If Ω is another alphabet with |Σ| = |Ω| (i.e. they have the same cardinal), then Σ∗ ∼ = Ω∗ (are isomorphic monoids). The concept of multiset (or bag) was introduced in order to capture the idea of multiplicity of appearance, or resource; there are a lot of notations and definitions for this concept. We deal here only with multiplicities from N, the set of non-negative integers. Consider, as above, a finite alphabet Σ = {a1 , · · · , an }, and N hΣi, the set of all mappings from Σ to N (it is denoted also by NΣ ). Let α, β : Σ → N be multisets. The additive structure of N induces an additive operation on multisets, defined as: ′′ +′′
: N hΣi × N hΣi → N hΣi (α, β) → α + β
and α + β : Σ → N is defined, pointwise, by (α + β)(a) = α(a) + β(a). It is a simple exercise to obtain that (N hΣi , +) is a commutative monoid, the identity being the empty multiset θ : Σ → N, θ(a) = 0 for all a ∈ Σ. We can enrich the structure of N hΣi by a scalar multiplication with scalars from N: ′′ ·′′
: N × N hΣi → N hΣi (n, α) → n · α
where n · α : Σ → N is defined, pointwise, by (n · α)(a) = n · α(a) for all a ∈ Σ. A semiring is an algebraic structure similar to a ring, but without the requirement that each element must have an additive inverse. A semimodule over a semiring is defined in a similar way to a module over a ring [45]. Proposition 4.33. (N hΣi , +) is a semimodule over the semiring N. Every letter (object) of the alphabet Σ can be viewed as a multiset. If a ∈ Σ we consider the multiset a ˜ : Σ → N defined by 1 for b = a a ˜(b) = 0 for b ∈ Σ \ {a}
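The universality property (Theorem 4.32) is what makes it possible to specify a morphism on Σ* by giving only its values on the letters. A minimal Python sketch of this extension (the helper names `extend` and `f_star` are ours):

```python
def extend(f, op, identity):
    """Extend a letter map f: Sigma -> M to the unique monoid morphism
    f*: Sigma* -> (M, op, identity), per the universality property."""
    def f_star(word):
        result = identity
        for letter in word:
            result = op(result, f(letter))
        return result
    return f_star

# f sends every letter to 1; its extension computes word length in (N, +, 0)
length = extend(lambda a: 1, lambda x, y: x + y, 0)
assert length("abbaba") == 6

# with M = Sigma* itself and op = concatenation, the extension of the
# embedding iota is the identity morphism on Sigma*
ident = extend(lambda a: a, lambda x, y: x + y, "")
assert ident("abc") == "abc"
```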
Identifying a letter from the alphabet with its associated multiset by j : Σ → N⟨Σ⟩, j(a) = ã, the sum and the scalar product can be used to write every multiset α ∈ N⟨Σ⟩ as

  α = Σ_{a∈Σ} α(a) · a.

Since Σ is finite, the sum is finite. In fact, the set of multisets {ã | a ∈ Σ} is a base for the N-semimodule N⟨Σ⟩. N⟨Σ⟩ satisfies, as Σ* does, a "universal property", but restricted to commutative monoids:

Theorem 4.34. Each function f from the alphabet Σ to any commutative monoid C can be uniquely extended to a morphism of commutative monoids f+ from N⟨Σ⟩ to C (i.e. with f+(ã) = f(a) for all a ∈ Σ).

As done for words, we can compare multisets with vectors of natural numbers. It is well known that, for n ∈ N, n ≠ 0, N^n is an N-semimodule with respect to the addition of vectors and the multiplication with scalars from N. Moreover, N^n is free with respect to the canonical base B = {e_i = (0, ..., 0, 1, 0, ..., 0) | i = 1, ..., n}. If Σ = {a1, ..., an}, then N⟨Σ⟩ ≅ N^n as N-semimodules. Considering the one-to-one correspondence from Σ onto B defined by a_i ↦ e_i for i = 1, ..., n, we have the following table:

  N⟨Σ⟩                          N^n
  α = Σ_{i=1}^n α(a_i) · a_i    (α(a_1), ..., α(a_n))
  multiset addition             vector addition
  scalar product                scalar product
  θ                             (0, ..., 0)

We denote by ϕΣ : N⟨Σ⟩ → N^n the isomorphism, with ϕΣ(Σ_{i=1}^n α(a_i) · a_i) = (α(a_1), ..., α(a_n)).
We are ready now to make the connections between all these concepts. We make intensive use of the "universal property" of the free monoid Σ*, since by this property it can be connected with any monoid, and not only with commutative ones, as in the case of N⟨Σ⟩. Forgetting about the N-semimodule structure of N⟨Σ⟩, and keeping only the monoid one, we can consider the following commutative diagram.
The diagram is formed by ι : Σ ↪ Σ*, j : Σ → N⟨Σ⟩, j* : Σ* → N⟨Σ⟩ (the unique extension of j given by Theorem 4.32), ϕΣ : N⟨Σ⟩ → N^n, and ψΣ = ϕΣ ◦ j* : Σ* → N^n; the diagram commutes, i.e. j* ◦ ι = j.
If w = b1 b2 ··· bm, with bk ∈ Σ, k = 1, ..., m, m ∈ N, then j*(w) = j(b1) + j(b2) + ··· + j(bm). For example, if Σ = {a, b} and w = abbaba, then j*(w) = j(a) + j(b) + j(b) + j(a) + j(b) + j(a) = a + b + b + a + b + a = 3a + 3b. If we denote by |w|_{a_i} the number of appearances of the symbol a_i in w, then j*(w) = Σ_{i=1}^n |w|_{a_i} · a_i; j* is a surjective morphism and, by the isomorphism theorem for monoids, we obtain the following theorem.

Theorem 4.35. Σ*/Ker j* ≅ N⟨Σ⟩.
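The morphism j* and the Parikh image are easy to experiment with; in the sketch below (our own helper names), `Counter` plays the role of N⟨Σ⟩:

```python
from collections import Counter

alphabet = ["a", "b"]

def j_star(word):
    """j*: Sigma* -> N<Sigma>, summing j(b_1) + ... + j(b_m),
    i.e. forgetting the order of the letters."""
    return Counter(word)

def parikh(word):
    """psi_Sigma = phi_Sigma o j*: list the multiplicities in alphabet order."""
    counts = j_star(word)
    return tuple(counts[a] for a in alphabet)

assert j_star("abbaba") == Counter({"a": 3, "b": 3})   # j*(abbaba) = 3a + 3b
# two words are Ker j*-equivalent iff they have the same Parikh image
assert parikh("abbaba") == parikh("bababa") == (3, 3)
```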
Remark 4.36. Ker j* is a congruence on Σ*. Two words w and w′ are in the same equivalence class with respect to Ker j* if and only if j*(w) = j*(w′). This is equivalent to |w|_{a_i} = |w′|_{a_i} for all i = 1, ..., n. Moreover, for a multiset α = Σ_{i=1}^n α(a_i) · a_i, its inverse image under j*, namely j*^{−1}(α) = {w ∈ Σ* | |w|_{a_i} = α(a_i), i = 1, ..., n}, is the language consisting of the words in which the number of appearances of the letter a_i equals α(a_i). For each w ∈ Σ*, the Parikh image is given by the number of occurrences of each a_i in w. The Parikh mapping ψΣ for a language over Σ* is given by the Parikh image applied to all the words in the language.

Remark 4.37. ϕΣ ◦ j* = ψΣ, where ψΣ is the Parikh mapping over Σ*.

5.2. Hadamard Product of Multisets. We present another operation which, as far as we know, has not yet been considered for multisets. The main structure described above is induced by the addition of natural numbers. Since (N, ·) is also a monoid, we can also define the Hadamard product of two multisets:

  "·" : N⟨Σ⟩ × N⟨Σ⟩ → N⟨Σ⟩,  (α, β) ↦ α · β,

where (α · β)(a) = α(a) · β(a) for all a ∈ Σ. If we write α as Σ_{i=1}^n α(a_i) · a_i and β as Σ_{i=1}^n β(a_i) · a_i, then α · β = Σ_{i=1}^n (α(a_i) · β(a_i)) · a_i, i.e. we multiply the multiplicities.
• This operation has a computer science interpretation. Consider all the lists of length n over natural numbers. In this manner, α corresponds to the list α̂ = [α(a_1), ..., α(a_n)] and β to the list β̂ = [β(a_1), ..., β(a_n)]; α · β corresponds to the list
zipwith (·) α̂ β̂, where zipwith is a well-known function in the programming language Haskell [50].
• The Hadamard product has not been previously used in membrane computing; however, there are several related studies on formal power series and matrices [48].

Proposition 4.38. The Hadamard product for multisets makes N⟨Σ⟩ a commutative monoid with zero (the empty multiset θ). Together with the other two operations defined previously, N⟨Σ⟩ becomes an N-algebra (i.e. a commutative semiring which is also a semimodule over a semiring).

Proof. The proof is a simple exercise.

Interestingly enough, the identity in N⟨Σ⟩ is exactly the multiset Σ_{i=1}^n a_i corresponding to Σ (viewed as a multiset).

Remark 4.39. If Σ has at least two elements, the product has non-zero divisors of θ (i.e. zero divisors), since, for example, a_1 · a_2 = θ.

Problem: It would be interesting to see what the correspondent of the Hadamard product is for words. A possible solution for this problem can be given by a suitable "encoding" of multisets.

5.3. Gödel Encoding of Multisets. Consider the set of the first n prime numbers {p_1, p_2, ..., p_n}, and let P_n be the set of all natural numbers whose prime decomposition uses only these primes, that is, m ∈ P_n if and only if m = p_1^{x_1} ··· p_n^{x_n}, with x_i ∈ N for all i = 1, ..., n.

Proposition 4.40. P_n is an N-algebra.

Proof. We describe only the operations which make P_n an N-algebra; the verifications are simple exercises.
• "Addition", denoted by ∔, is given by the multiplication of two natural numbers from P_n: if m = p_1^{x_1} ··· p_n^{x_n} and k = p_1^{y_1} ··· p_n^{y_n}, then m ∔ k = mk = p_1^{x_1+y_1} ··· p_n^{x_n+y_n}.
• "Scalar multiplication by a positive integer l", denoted by •, is given by the l-th power: if m = p_1^{x_1} ··· p_n^{x_n}, then l • m = m^l = p_1^{l x_1} ··· p_n^{l x_n}.
• "Hadamard multiplication" of two natural numbers from P_n, denoted by ⊙, is obtained by multiplying the corresponding powers from their prime decompositions, i.e.
if m = p_1^{x_1} ··· p_n^{x_n} and k = p_1^{y_1} ··· p_n^{y_n}, then m ⊙ k = p_1^{x_1 y_1} ··· p_n^{x_n y_n}.
• The identity for "addition" is p_1^0 ··· p_n^0 = 1, and the identity for "Hadamard multiplication" is p_1 ··· p_n.

We can provide a Gödel-like encoding for multisets.

Theorem 4.41. The N-algebras N⟨Σ⟩ and P_n are isomorphic, for every finite alphabet Σ = {a_1, ..., a_n}.
Proof. We define the mapping ϕ : Σ → P_n given by ϕ(a_i) = p_i, i = 1, ..., n. By the universality property for commutative monoids (Theorem 4.34), this mapping can be uniquely extended to a monoid homomorphism ϕ+ : N⟨Σ⟩ → P_n. Moreover, if α = Σ_{i=1}^n α(a_i) · a_i, then ϕ+(α) = ᾱ = p_1^{α(a_1)} ··· p_n^{α(a_n)} ∈ P_n. A routine verification proves that ϕ+ is a bijective mapping and also a homomorphism of N-algebras, i.e. ϕ+(α + β) = ϕ+(α) ∔ ϕ+(β), ϕ+(αβ) = ϕ+(α) ⊙ ϕ+(β) and ϕ+(l · α) = l • ϕ+(α).

Remark 4.42. Using the previous result, every multiset can be viewed as a unique natural number. As we have already done in the proof, we denote by ᾱ the unique natural number associated with a multiset α.

As a first consequence of this theorem we can characterize multisets with disjoint supports. As usually defined in the membrane computing community, if α is a multiset, we denote by supp(α) the set of letters from the alphabet Σ with non-zero multiplicities in α.

Theorem 4.43. Two multisets over the same alphabet have disjoint supports if and only if the corresponding Gödel numbers are coprime (relatively prime).

Proof. Let α, β ∈ N⟨Σ⟩ be two multisets such that supp(α) ∩ supp(β) = ∅. This means that their Hadamard product is zero. According to Theorem 4.41, α · β = θ ⇐⇒ ᾱ ⊙ β̄ = θ̄ = p_1^0 ··· p_n^0 = 1. This is possible if and only if gcd(ᾱ, β̄) = 1, i.e. they are coprime numbers.

This nice characterization can be somewhat extended to strings, using their connection with multisets.
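The encoding and Theorem 4.43 can be checked mechanically. A small sketch (the alphabet and prime table are our choices for illustration):

```python
from math import gcd
from collections import Counter

PRIMES = {"a": 2, "b": 3, "c": 5}        # p_i for a small alphabet {a, b, c}

def godel(alpha):
    """phi+: alpha -> p_1^alpha(a_1) * ... * p_n^alpha(a_n)."""
    n = 1
    for letter, p in PRIMES.items():
        n *= p ** alpha[letter]
    return n

alpha = Counter({"a": 2, "b": 1})         # the multiset 2a + b
beta = Counter({"c": 3})                  # the multiset 3c
assert godel(alpha) == 2**2 * 3           # = 12
assert godel(beta) == 5**3                # = 125
# multiset addition becomes multiplication of the encodings
assert godel(alpha + beta) == godel(alpha) * godel(beta)
# the Hadamard product multiplies the exponents; disjoint supports give theta
hadamard = Counter({x: alpha[x] * beta[x] for x in PRIMES})
assert godel(hadamard) == 1
# Theorem 4.43: disjoint supports <=> coprime Godel numbers
assert gcd(godel(alpha), godel(beta)) == 1
```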
To keep this approach self-contained, we include the definitions of other operations defined over multisets and their languages (see also [68, 90]). Definition 4.45. If α, β are multisets from N hΣi, we have Multiset-subtraction defined by (α − β)(a) = max(0, α(a) − β(a)), for all a ∈ Σ; Multiset-inclusion defined by α ⊆ β ⇐⇒ α(a) ≤ β(a), for all a ∈ Σ; Multiset-union defined by (α ∪ β)(a) = max(α(a), β(a)), for all a ∈ Σ; Multiset-intersection defined by (α ∩ β)(a) = min(α(a), β(a)), for all a ∈ Σ; If A and B are multiset languages, we define their sum by
A + B = {α + β | α ∈ A, β ∈ B};
if A and B are multiset languages, we define their union by A ∪ B = {α ∪ β | α ∈ A, β ∈ B};
if A and B are multiset languages, we define their intersection by A ∩ B = {α ∩ β | α ∈ A, β ∈ B}.

Notation: We denote by gcd(k, l) the greatest common divisor of k and l, by lcm(k, l) the least common multiple of k and l, and by k/l the fact that k is a divisor of l.

With respect to the Gödel encoding, we obtain the following result.

Theorem 4.46. Let α, β ∈ N⟨Σ⟩ be two multisets over Σ, let A, B ⊆ N⟨Σ⟩ be two multiset languages, and let ᾱ, β̄, Ā, B̄ be their corresponding Gödel encodings. Then:
• the encoding of A + B is Ā B̄ = {ᾱ β̄ | α ∈ A, β ∈ B};
• α ⊆ β ⇐⇒ ᾱ / β̄;
• the encoding of α ∪ β is lcm(ᾱ, β̄);
• the encoding of α ∩ β is gcd(ᾱ, β̄);
• the encoding of A ∪ B is {lcm(ᾱ, β̄) | α ∈ A, β ∈ B};
• the encoding of A ∩ B is {gcd(ᾱ, β̄) | α ∈ A, β ∈ B}.
Proof. This proof is essentially based on Theorem 4.41, and can be verified by easy calculations.
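These correspondences are easy to verify on small examples; the following self-contained sketch (helper names are ours) checks the sum/product, union/lcm and intersection/gcd lines of Theorem 4.46:

```python
from math import gcd
from collections import Counter

primes = {"a": 2, "b": 3, "c": 5}

def enc(m):
    """Godel encoding of a multiset over the alphabet {a, b, c}."""
    n = 1
    for letter, p in primes.items():
        n *= p ** m[letter]
    return n

def lcm(x, y):
    return x * y // gcd(x, y)

alpha = Counter({"a": 3, "b": 1})
beta = Counter({"a": 1, "b": 2, "c": 1})
union = Counter({x: max(alpha[x], beta[x]) for x in primes})
inter = Counter({x: min(alpha[x], beta[x]) for x in primes})
assert enc(alpha + beta) == enc(alpha) * enc(beta)   # sum          <-> product
assert enc(union) == lcm(enc(alpha), enc(beta))      # union        <-> lcm
assert enc(inter) == gcd(enc(alpha), enc(beta))      # intersection <-> gcd
# inclusion <-> divisibility: the intersection is included in alpha
assert enc(alpha) % enc(inter) == 0
```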
5.5. Gödel Encoding and Membrane Systems. We connect the Gödel encoding of multisets with membrane systems. The objects of a membrane are represented by a multiset, which can be directly encoded by a Gödel number. The application of rules is related to the assignment of objects to rules, and this process depends on the left-hand sides of the rules. A rule can be applied whenever the Gödel encoding of its left-hand side divides the Gödel encoding of the available resources. As soon as such an application is possible, applying the rule means consuming the objects in the left-hand side and then producing the new objects in the right-hand side. In terms of the Gödel encoding, this means that the encoding of the multiset of objects is divided by the encoding of the left-hand side of the rule, and then multiplied by the encoding of the right-hand side of the rule. We give a simple example of how a membrane system with promoters can be encoded.

5.5.1. Multiplication with Promoters. Figure 18 presents a P system Π1 with promoters for the multiplication of n objects a by m objects b, the result being the number of objects d in membrane 0. The object a is a promoter in the rule b → (b + d)|a; this means that the rule can only be applied in the presence of an object a. The available m objects b are used in order to apply the rule b → (b + d)|a m times in parallel at each step. Based on the maximal parallelism and the availability of a objects, the rule (a + u) → u is applied once at each step and consumes an a. The procedure is repeated n times
until no object a is present within the membrane. At each step, while one a is consumed, m objects d are generated. Finally we get n × m objects d.

Π1 = (V, µ, w0, R0, 0),
V = {a, b, d, u},
µ = [0 ]0,
w0 = na + mb + u,
R0 = {r1 : b → b + d|a, r2 : a + u → u}.
Figure 18. Multiplication with Promoters

The corresponding Gödel encoding is GΠ1 = (GV, µ, w0, GR0, 0), where GV = {2, 3, 5, 7}, µ = [0 ]0 and w0 = 2^n 3^m 7. GR0 is given by r1 : 3 → 15|2, where |2 is read "only if 2/w0", and r2 : 14 → 7. Generally speaking, r1 can be applied in parallel k times if and only if 3^k/w0, as long as 2/w0. We can apply in parallel both r1 k times and r2 j times if and only if (3^k 14^j)/w0. As a result we obtain (w0 / (3^k 14^j)) · 15^k · 7^j. As an example of a computation done with such an encoding, consider w0 = 756 = 2^2 · 3^3 · 7. The computation is given by the following evolution:

  756 → (756 / (3^3 · 14)) · 15^3 · 7 = 378 · 5^3 → (378 · 5^3 / (3^3 · 14)) · 15^3 · 7 = 189 · 5^6.

Although 3/189, the computation stops because 189 · 5^6 is not divisible by 2. A more detailed study of this approach could provide a new view of a membrane system as a Gödel encodings processor. The new aspect is related to the fact that a classic encoding gets a dynamic perspective, and some operations over membranes can be translated into operations over Gödel numbers. Such a view deserves further investigation.
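The divisibility-driven evolution of GΠ1 can be simulated directly on integers. The sketch below (helper names are ours) implements the maximally parallel strategy for this particular system: r1 fires once for every object b, and r2 fires once per step while an a is present:

```python
A, B, D, U = 2, 3, 5, 7          # Godel primes encoding the objects a, b, d, u

def exponent(w, p):
    """Multiplicity of the object encoded by prime p in the multiset w."""
    e = 0
    while w % p == 0:
        w //= p
        e += 1
    return e

def step(w):
    """One maximally parallel step of GPi1: r1 (3 -> 15 |2) fires once per
    object b, and r2 (14 -> 7) fires once, consuming one object a."""
    k = exponent(w, B)                      # r1 applied k times in parallel
    return w * (B * D) ** k * U // (B ** k * A * U)

def run(n, m):
    w = A ** n * B ** m * U                 # w0 = 2^n * 3^m * 7
    while w % A == 0:                       # promoter a still present
        w = step(w)
    return w

# the example in the text: w0 = 756 evolves to 378*5^3 and then to 189*5^6
assert step(756) == 378 * 5 ** 3
assert run(2, 3) == 189 * 5 ** 6
assert exponent(run(4, 5), D) == 4 * 5      # n*m objects d in general
```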
5.6. Norm of a Multiset. We find that some notions familiar from linear algebra have not yet been addressed in the multiset case. The inspiration for considering them comes from another encoding of multisets.

5.7. Encoding Multisets with Diagonal Matrices. Another view of multisets can be given by diagonal matrices. One may ask: "Why should we consider an encoding with diagonal matrices? Is the encoding with vectors not enough?" As we have already seen, we also consider the "Hadamard product" of multisets. This Hadamard product has no special meaning in the Gödel encoding, and looks quite unnatural there. These things change if we consider matrices.
As above, consider the finite alphabet Σ = {a1, ..., an} and its corresponding N-algebra of multisets N⟨Σ⟩; we can view a multiset α = Σ_{i=1}^{n} α(ai) · ai as the diagonal matrix Aα having α(ai) in the (i, i) place. Denoting the set of all diagonal matrices with entries from N by Dn(N), we have the following
Theorem 4.47. N⟨Σ⟩ and Dn(N) are isomorphic as N-algebras.
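The isomorphism of Theorem 4.47 can be illustrated concretely, representing Parikh images as tuples of multiplicities and diagonal matrices as plain lists of rows (a sketch; the helper names are ours):

```python
# A multiset is identified with its Parikh image (a tuple of multiplicities),
# and corresponds to the diagonal matrix carrying those entries.

def to_diag(alpha):
    """Parikh image -> diagonal matrix A_alpha."""
    n = len(alpha)
    return [[alpha[i] if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Usual matrix multiplication."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def hadamard(alpha, beta):
    """Hadamard product of multisets: entrywise product of multiplicities."""
    return tuple(x * y for x, y in zip(alpha, beta))

# Multiset sum corresponds to matrix sum, and the Hadamard product of
# multisets corresponds to the usual product of their diagonal matrices.
```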
This time, the Hadamard product becomes more interesting: if α and β are two multisets, the matrix corresponding to αβ is Aαβ = Aα Aβ, obtained by the usual multiplication of matrices, which coincides, for diagonal matrices, with the Hadamard product of matrices (also known as the entrywise product or the Schur product). See also [48] for details. Moreover, if α = β we obtain the square of a multiset, whose corresponding matrix is the square of Aα; its Parikh image is, in this case, the vector having the squares of the multiplicities as components. Diagonal matrices lead us to think about quadratic forms, and vectors lead us to think about the Euclidean norm. In this context, it can be useful to consider the norm of a multiset:
Definition 4.48. The norm of a multiset α = Σ_{i=1}^{n} α(ai) · ai, denoted by ||α||, is the Euclidean norm of its Parikh image, √(Σ_{i=1}^{n} α(ai)^2).
5.8. Integer Quadratic Forms. Considering diagonal matrices, we can think of quadratic forms. Historical considerations on integer quadratic forms can be found in [29]. It was Lagrange who started, in 1770, the theory of universal quadratic forms by proving the following result.
Theorem 4.49. (Four Squares Theorem). The quadratic form x^2 + y^2 + z^2 + t^2 is universal, i.e. for every positive integer n there exist integers α1, α2, α3, α4 such that n = α1^2 + α2^2 + α3^2 + α4^2.
A proof of this theorem can be found in [52], page 280. Legendre gave in 1798 a deeper result, called the Three Squares Theorem, which describes exactly which numbers require four squares (rather than three). His proof was incomplete, and it was Gauss who completed it.
Theorem 4.50. (Three Squares Theorem). Every natural number which is not of the form 4^k (8m + 7) can be written as a sum of three squares of integers.
5.9. Universality Results for the Multiset Norm. The last two theorems from the previous subsection can be used to prove a theorem concerning the Hadamard product of multisets and its weight.
Definition 4.51. The weight (or length) of a multiset α = Σ_{i=1}^{n} α(ai) · ai is the natural number |α| = Σ_{i=1}^{n} α(ai).
The same definition can be adapted for vectors from N^n or for matrices from Dn(N).
Theorem 4.52. Every natural number can be obtained as the weight of a squared multiset on a four letter alphabet (i.e., as the square of the norm of a multiset). Moreover, a four letter alphabet is the minimal one (in cardinality) for obtaining this.
Proof. Let n ∈ N. According to the Four Squares Theorem, there exist natural numbers α1, α2, α3, α4 such that n = α1^2 + α2^2 + α3^2 + α4^2. Moreover, according to the Three Squares Theorem, four is the minimum number of integers needed to obtain such a decomposition for all natural numbers. Let us consider a four letter alphabet Σ = {a, b, c, d}, and α = α1·a + α2·b + α3·c + α4·d the multiset defined on this alphabet, having the multiplicities α1, α2, α3, and α4, respectively. Using the Hadamard product, we can square α, obtaining α^2 = α1^2·a + α2^2·b + α3^2·c + α4^2·d. It is easy to see that n = |α^2| = ||α||^2. □
Remark 4.53. Let us consider the function F : N⟨Σ⟩ → N given by F(α) = ||α||^2. By the last theorem, this function is surjective; using a terminology similar to that of quadratic forms, we call this function universal.
• An obvious question arises: "Is F injective?". The quick answer is no, since, for instance, 2 = F(a + b) = F(c + d).
• The next question is "How many representations of a natural number as an image of a multiset under F do we have?", or equivalently, "How many multisets over a four-letter alphabet have the same norm?". We can use a result of number theory known as Jacobi's Theorem (see [81], page 431), asserting that the number of ways to represent a positive integer n as the sum of four squares is 8 times the sum of the divisors of n if n is odd, and 24 times the sum of the odd divisors of n if n is even.
Theorem 4.54. Let n be a positive integer. The number of integral solutions (x, y, z, t) of x^2 + y^2 + z^2 + t^2 = n is 8 Σ_{d|n} d if n is odd, and 24 Σ_{d|m} d if n is even, of the form n = 2^k·m with m odd.
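For small values of n, Jacobi's count can be checked by brute force (a sketch; `jacobi_formula` is our name for an implementation of the closed form stated in Theorem 4.54):

```python
from itertools import product

def jacobi_count(n):
    """Number of integer quadruples (x, y, z, t) with x^2+y^2+z^2+t^2 = n."""
    r = int(n ** 0.5)
    return sum(1 for q in product(range(-r, r + 1), repeat=4)
               if sum(x * x for x in q) == n)

def jacobi_formula(n):
    """Jacobi's closed form: 8*sigma(n) for odd n, 24*sigma(odd part) for even n."""
    if n % 2 == 1:
        return 8 * sum(d for d in range(1, n + 1) if n % d == 0)
    m = n
    while m % 2 == 0:           # extract the odd part m of n = 2^k * m
        m //= 2
    return 24 * sum(d for d in range(1, m + 1) if m % d == 0)
```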
Unfortunately, these nice formulas do not work directly in our multiset case, since they count solutions with negative integer components, while we consider only multisets, whose multiplicities are nonnegative. We denote by r(n) the number of representations of the natural number n as the length of a squared multiset, i.e. the cardinality of F^{-1}(n).
Theorem 4.55. The number r(n) of representations of a natural number n as the length of a squared multiset satisfies the recursive formula
|F^{-1}(n)| = r(n) = (1/n) Σ_{1 ≤ u ≤ √n} (5u^2 − n) r(n − u^2).
The result is a particular case of Exercise 8 on page 427 of [81], and so its proof is omitted.
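The recursion of Theorem 4.55 (taking r(0) = 1 for the empty multiset, a convention not stated above) can be compared against a direct count over quadruples of nonnegative integers (a sketch; helper names are ours):

```python
from itertools import product

def r_bruteforce(n):
    """Quadruples of nonnegative integers with x^2+y^2+z^2+t^2 = n, i.e.
    multisets over a four-letter alphabet whose squared norm is n."""
    s = int(n ** 0.5)
    return sum(1 for q in product(range(s + 1), repeat=4)
               if sum(x * x for x in q) == n)

def r_recursive(n, memo={0: 1}):
    """The recursion of Theorem 4.55, with the convention r(0) = 1."""
    if n not in memo:
        total, u = 0, 1
        while u * u <= n:
            total += (5 * u * u - n) * r_recursive(n - u * u)
            u += 1
        memo[n] = total // n    # the sum is divisible by n
    return memo[n]
```

For instance, both functions give r(4) = 5: the quadruples (2,0,0,0) in its four positions, plus (1,1,1,1).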
5.10. Magic Number FOUR in Formal Language Theory. It is somewhat amazing that we get a result emphasizing the importance of the number FOUR, which is also the number of nucleotides in DNA. The number four is also involved in other problems concerning languages, and we mention a few of them.
Definition 4.56. Let Σ be a non-empty alphabet.
• An abelian square over Σ is a non-empty word of the form w1 w2 such that w1, w2 have the same Parikh image or, in multiset terms, are represented by the same multiset.
• A word is abelian square free if it does not contain any abelian square as a subword.
• Abelian squares are said to be avoidable over Σ if it is possible to construct arbitrarily long abelian square free words over Σ.
It was Paul Erdős who asked, in 1961, whether abelian squares can be avoided in infinitely long words. As can easily be seen, abelian squares cannot be avoided over a three letter alphabet, since each word of length 8 over three letters contains an abelian square. V. Keränen proved in [58] that, in fact, the smallest alphabet over which abelian squares can be avoided in infinitely long words has four letters. Keränen also asked: "Could this special state of affairs be in connection with the genetic code used by nature?" Other related results are presented in [59]. It would be interesting to study the connections between the theory of abelian squares and the theory of multisets. The importance of the number four is also emphasized by the results of A. Salomaa in [100]. According to the results presented in [92], there are many ways of constructing DNA-based computers, and Watson-Crick complementarity guarantees universal computations in any model of DNA computers. A. Salomaa asserts in [100] that this is a consequence of the close similarity between Watson-Crick complementarity and the twin-shuffle languages.
A twin-shuffle language is a language over four letters, and it is powerful enough to serve as a basis for arbitrary computations; this is viewed as a formal explanation that the number of nucleotides in DNA is four. It is claimed that the number four of the bases A, G, T, C is ideal: we get exactly the alphabet of a twin-shuffle language. Watson-Crick complementarity amounts to the presence of the twin-shuffle language, and the universality results remain valid under several restrictions of the double strands.
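The abelian-square notions of Definition 4.56 can be made concrete with a short checker (a sketch; the function names are ours):

```python
from collections import Counter

def is_abelian_square(w):
    """Non-empty w = w1 w2 with w1 and w2 having the same Parikh image."""
    n = len(w)
    return n > 0 and n % 2 == 0 and Counter(w[:n // 2]) == Counter(w[n // 2:])

def is_abelian_square_free(w):
    """True iff no factor (contiguous subword) of w is an abelian square.
    Only even-length factors are tested, since odd-length ones cannot split
    into two halves with equal Parikh images."""
    n = len(w)
    return not any(is_abelian_square(w[i:j])
                   for i in range(n) for j in range(i + 2, n + 1, 2))
```

Exhausting the 3^8 ternary words of length 8 with such a checker confirms the claim above that abelian squares are unavoidable over a three letter alphabet.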
Bibliography
[BCI0] C. Bonchiş, G. Ciobanu, C. Izbaşa, D. Petcu. A Web-Based P Systems Simulator and its Parallelization. Lecture Notes in Computer Science vol.3699, Springer, 58–69, 2005.
[BCI1] C. Bonchiş, G. Ciobanu, C. Izbaşa. Encodings and Arithmetic Operations in Membrane Computing. Lecture Notes in Computer Science vol.3959, 618–627, 2006.
[BCI2] C. Bonchiş, G. Ciobanu, C. Izbaşa. Number Encodings and Arithmetics over Multisets. Proceedings SYNASC'06, IEEE Computer Society, 354–361, 2006.
[BCI3] C. Bonchiş, G. Ciobanu, C. Izbaşa. Compositional Asynchronous Membrane Systems. Progress in Natural Science vol.17, 411–416, 2007.
[BCI4] C. Bonchiş, G. Ciobanu, C. Izbaşa. Information Theory over Multisets. Computing and Informatics vol.27, 441–451, 2008.
[BCI5] C. Bonchiş, G. Ciobanu, G. Ghergu, C. Izbaşa. Data Compression on Multisets; Submultiset-free Codes. Proceedings SYNASC'08, IEEE Computer Society, 152–157, 2008.
[CG1] G. Ciobanu, M. Gontineac. Mealy Multiset Automata. International Journal of Foundations of Computer Science vol.17, 111–126, 2006.
[CG2] G. Ciobanu, M. Gontineac. Encodings of Multisets. International Journal of Foundations of Computer Science vol.20, 381–393, 2009.
CHAPTER 5
Modelling Power of Membrane Systems
1. Modeling Cell-Mediated Immunity by Membrane Systems
The immune system represents the natural defense of an organism. It comprises a network of cells, molecules, and organs whose primary tasks are to defend the organism from pathogens, and to maintain its integrity. Since our knowledge of the immune system is still incomplete, formal modeling can help provide a better understanding of its underlying principles and organization. In this section we provide a brief introduction to the biology of the immune system, recall several approaches used in modeling the immune system, and then describe a model based on membrane systems. The potentially dangerous foreign agents the immune system defends against include toxins, bacteria, fungi, parasites, viruses, and various environmental and self-produced antigens. In this section we present some basic immune system concepts; further details can be found in immunology textbooks. The most important function of the immune system is self/nonself recognition, which enables an organism to distinguish between the harmless self and the potentially dangerous nonself. The mechanism of self/nonself recognition is mediated through major histocompatibility complex (MHC) molecules binding short peptides intracellularly, and transporting them to the cell surface for recognition by the T cells of the immune system. These peptides act as markers; cells presenting self-peptides are tolerated, while those presenting foreign peptides are subject to various immune responses. We have two fluid systems in the body: blood and lymph. The blood and the lymphatic systems are responsible for transporting the agents of the immune system across the organism. Lymphocytes are the most important cells for adaptive immunity.
They circulate in both the blood and lymphatic systems, and make their home in lymphoid organs. The lymph nodes are the usual places where antigens are presented to the cells of the immune system. We have two main functionalities of the immune system: innate immunity, and adaptive immunity. We are born with a functional innate immunity system which is nonspecific; all antigens are attacked pretty much equally. This innate immunity represents the first defense of the organism, a defense achieved by many actions and components, such as surface barriers
and mucosal immunity, chemical barriers, and normal flora (microbes living inside and on the surface of the body). The cells involved in the innate immune system bind antigens using hundreds of pattern recognition receptors. These receptors are encoded in the germ lines of each person; this immunity is passed from generation to generation. In this chapter we concentrate on adaptive immunity, which is more interesting for formal modeling. Adaptive Immunity. Adaptive immunity (or acquired immunity) is a function of the immune system given by the fact that the immune system has to learn the specific antigens before it can actually remove them from the organism. Adaptive immunity is developed and modified throughout the life of the host organism. Adaptive immunity appears to be a distributed system with a sort of coordination control, able to perform its complex task in an effective and efficient way. The most important components of adaptive immunity are the major types of lymphocytes: T cells, and B cells. Peripheral blood contains 20%-50% of circulating lymphocytes; the rest move in the lymph system. Roughly 80% of them are T cells, 15% are B cells, and the remainder are null or undifferentiated cells. Their total mass is about the same as that of the brain. B cells are produced from the stem cells in bone marrow; B cells produce antibodies and oversee humoral immunity. T cells are nonantibody-producing lymphocytes which are produced in the bone marrow, but sensitized in the thymus. Parts of the immune system are changeable and can adapt to better attack the invading antigen. There are two fundamental adaptive mechanisms: humoral immunity, and cell-mediated immunity. Humoral immunity is mediated by serum antibodies, which are proteins secreted by the B cell compartment of the immune response. Cell-mediated immunity consists of the T cells. Each T cell has many identical antigen receptors which interact with antigens. Cell-Mediated Immunity.
Phagocytes are cells able to attract, adhere to, engulf, and ingest foreign bodies. Promonocytes are made in the bone marrow, then released into blood, and called circulating monocytes which mature into macrophages. Once a macrophage phagocytizes a cell, it places portions of its proteins, called T cell epitopes, on the macrophage surface. These surface markers serve as an alarm to other immune cells which then infer the form of the invader. Macrophages engulf antigens, process them internally, and display parts of them on the cell surface together with MHC molecules. This mechanism sensitizes T cells to recognize the antigens. All cells are coated with various molecules and receptors. CD stands for cluster of differentiation, and there are more than one hundred and sixty clusters, each of which is a different molecule that coats the surface. Every T cell has about 10^5 molecules on its surface; T cells have CD2, CD3, CD4, CD28, CD45R, and other non-CD molecules on their surfaces. This large number of molecules residing on the surfaces of lymphocytes produces huge receptor variability. They produce random configurations on the lymphocyte surfaces; there exist around 10^18 structurally different receptors.
An antigen is likely to find a near-perfect fit with a very small number of lymphocytes. T cells are primed in the thymus, where they undergo two selection processes. The first, a positive selection process, keeps only those T cells with the correct set of receptors that can recognize self-peptides presented by the MHC molecules. Then a negative selection process begins, whereby only T cells that can recognize MHC molecules complexed with foreign peptides are allowed to pass out of the thymus. Cytotoxic or killer T cells (CD8+) do their work by releasing lymphotoxins, which cause cell lysis. Helper T cells (CD4+) serve as regulators, and trigger various immune responses. They secrete chemicals called lymphokines that stimulate cytotoxic T and B cells to grow and divide, attract neutrophils, and enhance the ability of macrophages to engulf and destroy microbes. Suppressor T cells inhibit production of cytotoxic T cells, providing a mechanism for preventing the self-damage of healthy cells by the immune responses. Memory T cells are programmed to recognize and respond to a pathogen previously encountered by the organism.
1.1. Modeling in Immunology. In this section we briefly present some continuous and discrete models of the immune system. The main problem of the immune system is to distinguish between self and nonself. We can say that the success of the immune system is dependent on its ability to distinguish between harmful nonself and everything else. This problem is difficult because there are ~10^16 patterns in nonself, and they have to be distinguished from ~10^6 self patterns. Moreover, the environment is highly distributed, resources are deficient in quantity compared with the demand, and the host organism must continue to work all the time. The immune system solves this problem by using a multilayered architecture of barriers, namely a physical barrier (the skin), physiological barriers (e.g., the pH values), and cells and molecules of the innate and adaptive immune response. The resulting system is flexible, scalable, robust, and resilient to subversion. The immune system models are mostly based on two biological theories of the immune system, namely the clonal selection theory, and the idiotypic network theory. The clonal selection theory was described by Nobel Prize winner F. Burnet, following the track first highlighted by P. Ehrlich at the beginning of the twentieth century. The theory of clonal selection states that the immune response is the result of a selection of the right antibody by the antigen, much like the best adapted individual is selected by the environment in the theory of natural selection. The selected subsets of B cells and T cells grow and differentiate; then they turn off when the antigen concentration falls below some threshold. The idiotypic network theory was formulated by Nobel Prize winner N.K. Jerne. According to the idiotypic network theory, the immune system is a regulated network of molecules and
cells able to recognize one another even in the absence of antigens. The idiotypic network hypothesis is based on the concept that lymphocytes are not isolated, but communicate with each other. As a consequence, the identification of antigens is not done by a single recognizing set, but by several sets connected by antigen-antibody reactions as a network. Jerne suggested that during an immune response, antigens directly elicit the production of a first set of antibodies, Ab1; then these antibodies act as antigens, and elicit the production of a second set of "anti-idiotypic" antibodies Ab2 which recognize idiotypes on Ab1 antibodies, and so forth. Immunologists consider these two independent theories to be consistent and complementary to each other. The clonal selection theory is considered to be important for a global understanding of the immune system. The idiotypic network theory is useful for understanding the existence of anti-idiotypic reactions, and the immune responses. Most continuous models have been formulated using the framework of both immunological theories, while the discrete models are mainly based on Jerne's theory. The systemic models of immune responses have mainly been devoted to collective actions of various immune system components. These models do not study single cells or single molecules, but rather focus on cell interactions and collective behavior in activation, control, and supply of the immune responses. Continuous Models. We briefly point out the main ideas, together with the advantages and disadvantages of some continuous models of immune systems. A good survey of the continuous models can be found in [94]. Continuous models describe the time evolution of concentrations of cellular and molecular entities which play a significant part in the immune system. Each of these models works with continuous functions defining the concentration (number of elements/volume) of cellular and molecular entities.
These models use systems of nonlinear ordinary differential equations, mainly representing conservation laws, in which the unknowns represent the concentrations. Some models assume that concentrations do not depend on space variables, and interactions between entities occur with uniformly random collisions. When concentrations do depend on the space variables, the problem becomes mathematically more difficult, and partial differential equations are needed. The advantages of this approach are given by the fact that the mathematical methods have a well known and theoretically established background, and the behavior of the solution can be described by qualitative and asymptotic analysis. Moreover, if a numerical solution is needed, then well known numerical methods are available. However, there are some biological disadvantages. The approximations necessary to keep the equations tractable may not be biologically evident, and so the model may move away from real biology. Worse, most of these models fail when the concentrations of some entities decrease drastically. The models are not compositional, and
inserting new biological details may change the mathematical structure of the model. The general framework of the continuous models is given by nonlinear equations describing generic interacting systems. They have the general form ∂t x = G(x) − L(x), where x = (x1, ..., xn) is a vector describing concentrations, and the vectors G(x) and L(x) represent the gain and loss terms, respectively. The solution of these equations is represented by a curve in the n-dimensional state space describing the time evolution of the system starting from the initial condition x∗ = x(t = 0). The qualitative behavior of the solutions is usually investigated, including stationary points (fixed points), local and global stability, attractors, limit cycles describing periodic behavior, and strange attractors describing chaotic behavior. In these models it is crucial to properly describe the affinity between cells and molecules. The simplest affinity function is a bilinear form; more detailed models need more complex nonlinear terms. In order to make them more realistic, there are some possible improvements of the continuous models. For instance, considering a time delay for the interactions between entities, the behavior of the delayed systems may be qualitatively different (e.g., the stability of fixed points can change). Usually new cells come into the system either from bone marrow or following some hypermutations. This could be modeled using a stochastic source term in the equations; the overall behavior can be very different. Discrete Models. Discrete models consider individual entities as primitives, and the whole system dynamics arises from their collective actions. These models use various mathematical techniques such as the generalized Boltzmann equation, cellular automata, and lattice gas. Most of these approaches are widely used to study complex systems, and are based on computer simulations. Good surveys of the discrete models can be found in [94].
One approach is based on generalized Boltzmann equations. Models based on cellular automata and lattice gas have been developed over the last twenty years, producing interesting results. Automata-based models are used to investigate the logic of interactions among a number of different cell types and their outcomes in terms of immune responses. The rules modeling the dynamic evolution of these automata-based models are expressed by logical operations. Application of the rules is iterated over discrete time. Some of the discrete models bring into the field the experience of computer scientists. The guiding line of these approaches is a deeper comprehension of the immune system by describing and using immune system information processing in applications. There are already several applications of the artificial immune system. A real advantage of these models is that they can be built using biological language, with biologically relevant approximations. New biological details are easy to insert without changing the mathematical structure of
the model. On the other hand, qualitative and asymptotic analyses are no longer possible, or are quite difficult. Modeling Cell-Mediated Immunity. T cells play a central role in cell-mediated immunity. They orchestrate the immune responses to foreign aggressors. When a T cell recognizes a foreign antigen, it initiates several signaling pathways, and the cell activates. The recognition of a foreign antigen is an extremely sensitive, specific, and reliable process; models of the T cell signaling network can help us understand how these features arise and work. So far, the study of T cell activation has benefited from the use of some mathematical models. A key event for T cell activation is an appropriate interaction between the T cells armed with T cell antigen receptors (TCRs) and the professional antigen presenting cells (APCs). TCR recognizes only the foreign antigen in the form of short peptides presented in the groove of a molecule on the surface of the APC known as the major histocompatibility complex MHC. The physical interaction of TCRs with the peptide complexes is unique among signaling systems in that it takes place over a continuous range of binding values. The recognition of antigen initiates signal transduction. This can be broken down into a series of discrete steps related to various molecular events (interactions, state transitions) within the signaling pathways. These are shown in Figure 1. T cell responses show a hierarchical organization depending on the extent of TCR occupancy, the duration of antigen binding, the timing of encounters, and the engagement of costimulatory receptors. TCR is a very complex structure composed of a minimum of eight strongly associated chains. The actual arrangement and stoichiometry of CD3 and TCR chains within the TCR complex is not entirely known.
We refer in this chapter to a part of the signaling network, namely the activation of Zap70 and the phosphorylation of the adapter protein LAT, which are essential for connecting to the major intracellular signaling pathways Ca2+/calcineurin and Ras/Rac/MAPK kinases. Although many other receptor interactions may contribute positively or negatively to the quality and the quantity of the T cell immune responses, TCR signaling upon antigen recognition determines a certain response (see [CTDHM]). In this chapter we present a model of the T cell signaling network by a distributed version of P systems. Before presenting this version, let us emphasize why P systems are suitable for modeling immune systems. The immune system has several subsystems, each with its own rules. This structure can be represented faithfully by a P system where each subsystem is modeled by a membrane. The P system application of rules in a maximally parallel manner expresses the natural competition for scarce resources in the immune system. Communication and coordination are essential, and thus we consider P systems with symport/antiport rules of communication among membranes. Since the immune system environment is highly distributed,
Figure 1. TCR Signaling Pathways. we introduce a P system using the well known client-server paradigm used in computer networks and distributed systems. 1.2. Client-Server P Systems. In order to strengthen the connection between P systems and biological systems, we introduce, study, and use a new version of P systems called client-server P systems (CSPSs) to model the T cell signaling pathways and T cell activation [CDHMT]. Formally, we start from a particular variant of P systems, namely P systems with symport/antiport rules. The specificity of this type of P systems lies in the form of their rules, which can be one of: • (ab, in), (ab, out): objects a and b can pass through a membrane only together, in the same direction (symport rules), and • (a, out; b, in): objects a and b can pass through a membrane only together, but in different directions (antiport rules). Theoretical results regarding this type of P system can be found in [90]. Generally, these results take into consideration the number of membranes and the weight of the port (i.e., the number of objects involved in a symport or antiport rule).
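The effect of symport and antiport rules on the contents of a membrane and its surrounding region can be sketched as follows (a sketch with hypothetical helper names; multisets are represented as Counters):

```python
from collections import Counter

def symport(inside, outside, objs, direction):
    """(objs, in) / (objs, out): the objects in objs cross the membrane together."""
    src, dst = (outside, inside) if direction == "in" else (inside, outside)
    need = Counter(objs)
    if any(src[o] < c for o, c in need.items()):
        return False            # rule not applicable: objects missing on the source side
    src.subtract(need)
    dst.update(need)
    return True

def antiport(inside, outside, out_objs, in_objs):
    """(a, out; b, in): objects cross together, but in opposite directions."""
    if any(inside[o] < c for o, c in Counter(out_objs).items()) or \
       any(outside[o] < c for o, c in Counter(in_objs).items()):
        return False
    inside.subtract(Counter(out_objs))
    outside.update(Counter(out_objs))
    outside.subtract(Counter(in_objs))
    inside.update(Counter(in_objs))
    return True
```

The weight of the port is simply the total multiplicity of the objects in the rule (e.g., len(objs) for a symport rule over distinct objects).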
We formalize a client-server model according to the following description. The clients are characterized by their states, while the server stores the current states of the clients and interaction rules defined over the states. When two clients can interact, the server notifies them, supplying at that time the corresponding rule. The clients interact and send their new states to the server, thus keeping the model consistent. A client-server P system (CSPS) is a P system composed of elementary membranes (except the skin), with state objects modeling the states of the clients, and rule objects modeling the communication between clients. The communication is of symport type. In formal notation, the CSPS contains the skin membrane (numbered 1), together with m membranes representing clients (numbered 2 to m + 1), and a membrane for the server (numbered m + 2), all arranged inside the skin membrane. In the original approach of [CDHMT], a rule object ηα1α2α3α4α5 has the following meaning: two clients in states α1 and α2 can interact and pass to states α3 and α4, respectively; at the same time, it is possible to get supplementary information α5. Formally, a client-server P system is a construct Π = (V, µ, w1, ..., wm+2, we, Me, R1, ..., Rm+2, m + 2), where:
(1) V = A ∪ B, with A, B disjoint sets such that:
• A = ∪_{i=2}^{m+1} Si, where Si represents the states of client i;
• B = {ηα1α2α3α4α5 | α5 ∈ Me ∪ {λ}, αi ∈ A ∪ Me, 1 ≤ i ≤ 4, where α1 + α2 ⇒ α3 + α4 + α5 is an interaction rule};
(2) µ = [1 [2 ]2 ... [m+2 ]m+2 ]1;
(3) w1 = ∅, wm+2 = B ∪ Sinitial, where the initial state of the server is Sinitial = {s2, s3, ..., sm+1}, si ∈ Si, 2 ≤ i ≤ m + 1 (the si represent the initial states of the clients), and wi = Si \ {si}, si ∈ Sinitial, 2 ≤ i ≤ m + 1;
(4) Me = A;
(5) R1 = {(αj αk ηα1α2α3α4α5, out) | j ∈ {3, 4}, k ∈ {1, 2}, j − k ≠ 2, αk, αk+2, α5 ∈ Me, αj, αj−2 ∈ A} ∪ {(α3 α4 ηα1α2α3α4α5, out) | αi ∈ A, 1 ≤ i ≤ 4, α5 ∈ Me} ∪ {(α3 α4 α5 ηα1α2α3α4α5, in) | αi ∈ A ∪ Me, 1 ≤ i ≤ 5},
Rm+2 = {(α1 α2 ηα1α2α3α4α5, out), (α3 α4 α5 ηα1α2α3α4α5, in) | αi ∈ A ∪ Me, 1 ≤ i ≤ 4, α5 ∈ Me ∪ {λ}},
Ri = {(αj ηα1α2α3α4α5, in), (αj+2 ηα1α2α3α4α5, out) | j ∈ {1, 2}, αj, αj+2 ∈ Si}, for 2 ≤ i ≤ m + 1.
Inside the server membrane (the one with label m+2) there are state objects (representing the current states of the clients) and rule objects. When two state objects can be combined according to a rule given by a rule object, the server membrane gives a “signal” to the respective client membranes. The meaning of the rule (α1 α2 ηα1 α2 α3 α4 α5 , out) ∈ Rm+2 is the following: the clients represented by membranes p and q, where α1 ∈ Sp and
α2 ∈ Sq , can interact according to the rule described by the rule object ηα1 α2 α3 α4 α5 ; as a result, these three objects (the current states and the rule object) exit the server region. The client membranes p and q involved absorb their own state objects and the rule object by means of their corresponding rules (α1 ηα1 α2 α3 α4 α5 , in) ∈ Rp or (α2 ηα1 α2 α3 α4 α5 , in) ∈ Rp (and similarly for membrane q). Then they release for further use their new states and the rule object into the skin membrane, by (α3 ηα1 α2 α3 α4 α5 , out) ∈ Rp or (α4 ηα1 α2 α3 α4 α5 , out) ∈ Rp (and similarly for membrane q). If α5 6= λ, the supplementary information is brought in from the environment with rules from R1 . We emphasize the fact that the notifications of clients, and the interactions between them take place in a parallel manner. Client-server P systems were theoretically investigated in [CDHMT], where it is proved that CSPSs of degree at most 4 and using symport rules of weight at most 4 are computationally universal. However, our goal here is to emphasize the use of P systems in modeling molecular biology, particularly in adaptive immunity. In order to make the theory able to describe the T cell signaling network, we adopt the following refined approach from abstract models to software experiments. We consider that the process of modeling and simulation implies three basic objects: the real system, the abstract model, and the simulator, together with two main relations: the modeling relation, which ties the real system to the model, and the simulation relation, which connects the model and the abstract simulator. Finally, the abstract simulator helps us design a faithful computer implementation able to ensure useful software experiments. These steps are described in the following picture:
[Diagram: the T cell signaling network is tied by the modeling relation to the CSPS model; the simulation relation connects the CSPS model to the CSPsim abstract simulator; an implementation step leads from CSPsim to the MOlNET software.]
1.3. Client-Server P Simulators. Aiming to capture both qualitative and quantitative features of the T cell signaling network, we start from CSPS and define a client-server P simulator (CSPsim for short) as a set of communicating automata, together with appropriate internal transitions for each component and communication steps between components. This approach leads to a novel software tool composed of a data server and its clients, together with a graphical tool for the visual representation of a molecular network. It is designed to work over computer networks, using the power of several processors and systems. It is platform-independent and able to work over heterogeneous networks.
274
5. MODELLING POWER OF MEMBRANE SYSTEMS
For each membrane of a CSPS, we define an automaton consisting of ports carrying input and output values, states, internal and external transitions, and timings. The structure of a CSPsim is defined as a set of automata together with associated interaction partners. Each membrane of a CSPS corresponds to an automaton in CSPsim. A state of the CSPsim has a set of rules (reactions) involving various membranes. We emphasize the existence of a coordinator (server) that controls a certain number of client membranes, and recomputes the new structure and rules of the system based on the least putative times of reactions. When a reaction occurs, the state of the simulator changes discretely, step by step. This means that nonreactive collisions are ignored. It is possible to use a single automaton for a certain type of membrane, and therefore simulating a real system becomes more tractable. For each membrane i we define an automaton M = (X, S, Y, δ_int, δ_ext, λ, τ), where
• X = {(p, v) | p ∈ IPorts, v ∈ Xp}, where IPorts is a set of input ports, and Xp is a set of possible input values;
• Y = {(p, v) | p ∈ OPorts, v ∈ Yp}, where OPorts is a set of output ports, and Yp is a set of possible output values;
• S is the set of states;
• δ_int : S → S is the internal transition function;
• δ_ext : Q × X → S is the external transition function, where Q = {(s, e) | s ∈ S, 0 ≤ e ≤ τ(s)}, and e is the elapsed time since the last transition;
• λ : S → Y is the output function;
• τ : S → R+ is the time advance function.
Let us describe how a CSPsim essentially simulates a molecular interaction. In biology, the interaction rules are given by α1 A + α2 B → Σ_i βi Ci, where α1, α2, and βi represent multiplicities. The executive has specific input and output ports for each membrane client. In a certain configuration, the executive selects a rule that could be applied, i.e., it checks whether the species on the left-hand side of the rule are available in the current configuration.
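As an illustration only, the automaton M above can be sketched in Python as a DEVS-style component; the state names, port names, and time values below are invented for the example and are not part of the CSPS definition.

```python
from dataclasses import dataclass

# Illustrative sketch of the membrane automaton M = (X, S, Y, delta_int,
# delta_ext, lambda, tau) defined above; states, ports, and timings are
# invented for the example.

@dataclass
class MembraneAutomaton:
    state: str = "idle"
    elapsed: float = 0.0  # e, the time since the last transition

    def tau(self, s):
        # time advance function tau : S -> R+
        return {"idle": float("inf"), "reacting": 1.0}[s]

    def delta_int(self, s):
        # internal transition delta_int : S -> S (fires when e reaches tau(s))
        return "idle"

    def delta_ext(self, s, e, x):
        # external transition delta_ext : Q x X -> S, with Q = {(s, e)}
        port, _value = x
        return "reacting" if port == "rule_in" else s

    def output(self, s):
        # output function lambda : S -> Y
        return ("ack_out", s)

    def receive(self, x):
        self.state = self.delta_ext(self.state, self.elapsed, x)
        self.elapsed = 0.0

m = MembraneAutomaton()
m.receive(("rule_in", "r"))  # the executive announces a rule on an input port
y = m.output(m.state)        # the client answers with an acknowledgement
```

The split between δ_int (autonomous evolution after τ(s) time units) and δ_ext (reaction to a message on an input port) mirrors the coordination between executive and clients described next.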
Then the executive sends to the clients involved in this rule a message describing the rule, by using its transition function δ. Each client receiving such a message uses its own transition function, and then sends an acknowledgement to the executive. After receiving the acknowledgements from the clients, the executive performs a transition and changes to a new configuration, and so on. The executive receives from the clients the necessary information regarding the reactions that took place, or the unsuccessful attempts to react. When quantitative changes appear in a CSPsim, the executive recomputes the putative times for each reaction according to Gibson's algorithm, and modifies the client membranes such that the client selected to participate in the next reaction is the one with the least putative time. The results yielded by
the abstract simulation of a molecular network are strongly dependent on the algorithm used for choosing the performed reactions. Several algorithms have been proposed to simulate the behavior of biological systems. Our algorithm is based on widely accepted stochastic mesoscopic algorithms in biology. The mesoscopic view counts particles, but does not keep track of their position or momentum. Several interaction algorithms based on this mesoscopic view of physical biology have been proposed. Gillespie developed two such algorithms, the Direct Method and the First Reaction Method. At each iteration these algorithms generate a putative time for each reaction according to an exponential distribution. The reaction with the least putative time is executed, and the system is updated accordingly. Both algorithms have a time complexity of O(r) per iteration, where r is the number of reactions. Gibson improved the Direct Method, obtaining the Next Reaction Method. This algorithm uses Markov chains for choosing which reaction will be performed next. One major improvement consists of generating only one random number for each reaction. It also recomputes the putative times only for the reactions affected by the last executed reaction. The complexity of this algorithm is O(log r) per iteration, where r is the number of reactions, so it is clearly more efficient than Gillespie's algorithms. A client-server P simulator with two clients has the same computational power as a Turing machine. It is also possible to prove some important properties of our abstract CSP simulator, such as hierarchical and modular composition, universality, and uniqueness. These properties support the development of simulation environments. Thus an abstract CSPsim and its implementation provide support for building models in a hierarchical and modular manner.
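For concreteness, one iteration of Gillespie's First Reaction Method mentioned above can be sketched as follows; the reaction set and rate constant are illustrative, and the propensities are computed in the usual combinatorial way. This is a sketch of the general algorithm, not of the CSPsim implementation.

```python
import math
import random

# One iteration of Gillespie's First Reaction Method (sketch): draw a putative
# time for every reaction from an exponential distribution whose rate is the
# reaction's propensity, then fire the reaction with the least putative time.

def first_reaction_step(counts, reactions, rng=random.random):
    # reactions: list of (rate, reactants, products); species multisets as dicts
    best_t, best_r = float("inf"), None
    for i, (rate, reactants, _) in enumerate(reactions):
        a = rate
        for sp, n in reactants.items():
            for k in range(n):
                a *= counts.get(sp, 0) - k  # combinatorial propensity
        if a > 0:
            t = -math.log(rng()) / a        # exponential putative time
            if t < best_t:
                best_t, best_r = t, i
    if best_r is None:
        return None                          # no reaction can occur
    _, reactants, products = reactions[best_r]
    for sp, n in reactants.items():          # consume the left-hand side
        counts[sp] -= n
    for sp, n in products.items():           # produce the right-hand side
        counts[sp] = counts.get(sp, 0) + n
    return best_t

counts = {"A": 100, "B": 50}
reactions = [(0.01, {"A": 1, "B": 1}, {"C": 1})]  # A + B -> C (illustrative)
first_reaction_step(counts, reactions)
```

The O(r) cost per iteration is visible in the loop over all reactions; Gibson's Next Reaction Method avoids it by keeping the putative times in an indexed structure and recomputing only the affected ones.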
Moreover, within the framework presented for modeling and simulation we can prove, up to a specific modeling relation, that the simulations faithfully reflect the behavior of the real system. 1.4. Implementing T Cell Signaling Networks. The last step from modeling to software experiments is the implementation of the abstract simulators. We consider molecular networks in general, and in particular the signaling network that grounds T cell activation. Molecular networks and computer networks look and behave similarly. In this context, a suitable approach to implementing the abstract CSPsim for molecular networks is to develop a software system running over computer networks. We developed a novel software system called MOlNET as an implementation of our model for T cell molecular networks. MOlNET has two main entities: the data server and the clients. The data server is the implementation of the executive, and the clients implement the basic CSPsim structure simulating the CSPS membranes. The formal frameworks of the CSPS model and CSPsim ensure model validity, as well as simulator correctness. A simulator performs the implicit operations of the model. An implementation is able to perform the simulations described by CSPsim.
We can assume that every membrane is implemented by software clients, the executive is implemented by the MOlNET server, and the communication between membranes is implemented by MOlNET communication protocols.
[Figure 2 diagram: a graphical server with a central logger and simulation IO connects to the data server and its algorithm component; a client generator on each host starts membrane clients with local loggers; the components are linked by one-way and two-way communication channels and execution control.]
Figure 2. MOlNET Architecture.
The entire architecture is shown in Figure 2. The software has a modular architecture which allows us to easily integrate other tools, or even to use various interaction algorithms:
• Graphical server: it ensures a user-friendly graphical interface. The user is provided with multiple facilities, such as saving and loading simulations, viewing both input and output data, modifying the system data while software experiments run, or defining tracers that provide charts for the quantitative evolution of different clients of the system. The graphical server is also responsible for distributing the processes over the available hosts by communicating with the client generator through a specific protocol.
• Data server: it corresponds to the executive; its main role is to provide the clients with data regarding their interaction partners.
• Client generator: it is a process responsible only for starting other processes (i.e., the client processes) at a specific host.
• Clients: they correspond to the client membranes of CSPS. According to the interaction algorithm, each client initiates reactions with its interaction partners, and participates in the reaction results. In this way we avoid getting any other client involved in a specific interaction. The clients keep track of the quantities involved in
reactions. For each component that appears in a tracer, its client informs the graphical server about the quantitative modifications. The output data consists of tracers. Each tracer is defined for a group of clients. It provides both an output file and a visual representation (chart) of the evolution in time of the given clients. The user is provided with an intuitive graphical interface for introducing the input data of a simulation. The input data window includes several sections corresponding to various molecular components, membranes, membrane rules, and their locations (hosts). A snapshot is given in Figure 3.
Figure 3. Screenshot of the MOlNET Software.
For the implementation we use the C programming language, the BSD socket interface for network communication, and GTK 2.0 for the graphical user interface. This implementation uses computer networks for distributing the computation, having the advantage of executing simulations over a large number of membranes, and so providing valuable and relevant data about the behavior of particular molecular networks. Let us now return to modeling the behavior of the T cell signaling network. T cell activation is initiated by the interaction between the TCR arms of T cells and the professional antigen presenting cells. TCR recognizes the foreign antigen in the form of short peptides presented in the groove of a molecule on the surface of the antigen presenting cells. The recognition of antigen initiates signal transduction. This can be broken down into a series of discrete steps that are related to various molecular events within the signaling pathways. In order to illustrate the evolution of the model, we consider only a part of the signaling network, namely the reaction between Zap70 and LAT, followed by the binding of GADS⊕SLP-76 to phosphorylated LAT. Zap70 and LAT correspond to CSPS membranes, and CSPsim
clients. First there is a communication between Zap70 and LAT, followed by the rules of these two components; they wait in their states for the time indicated by their reaction, then both send a message to the executive. The executive modifies the system so as to reflect the quantitative aspect of the reaction. The executive also modifies the transition rules from the initial state for these two components, allowing the binding of the complex GADS⊕SLP-76 to phosphorylated LAT. We simulate and analyze the interactions that drive the T cell behavior by using our approach based on client-server P systems and simulators. The software experiments provide data that can be interpreted with statistical methods. During these experiments we systematically perturb and monitor the signaling network outcomes, by adding or deleting components, modifying the quantities and rates, establishing new interactions between components, or removing existing ones due to certain mutations. In this way, we can represent various factors (the number of triggered TCRs, the presence or absence of costimulation) which play a role in determining the outcome of the T cell. Software experiments can contribute to explaining how these factors determine differences in the formation and composition of the TCR signaling complexes, and how they drive various biological consequences of T cell signaling networks.
1.5. Software Experiments and Their Biological Relevance. Biological behavior is strongly influenced by the ability of molecules to communicate specifically with each other within large molecular networks. Crucial for the T cell behavior is the signaling network, which could engage various cell responses due to potentially different signal types, quantities, and durations. We describe the network behavior both qualitatively and quantitatively. Many studies on T cell biology have revealed different types of functional responses (activation, proliferation, anergy, cell death) to different TCR stimulations. It is known that TCR engagement under some circumstances leads to proliferation and effector function, while under other conditions TCR stimulation leads to anergy. The factors that shape the response to antigen are the concentration of antigen, the duration of antigen binding, the timing of encounters, and the engagement of other receptors (such as CD28 or CTLA4). These T cell responses can be broken down into a series of discrete steps that are related to molecular events within a larger molecular network. Recent data suggest that unbalanced activation of NFAT relative to its cooperating AP-1 imposes the genetic program of T cell anergy that opposes the program of activation-proliferation mediated by the NFAT-AP1 complex. Based on these results, we simulate the molecular network that drives T cell activation or T cell anergy. The results obtained from experiments are statistically processed and interpreted. One of the main goals of statistics is inferring conclusions, based solely on a finite number of observations, about events likely to happen an
indefinite (infinite) number of times. Nonetheless, the strength of the conclusions yielded depends on the sample size on which the analysis is based. In fact, in many cases a contradiction appears between the minimum size required by statistics in order to make its methods applicable, and the size that biological wet experiments can provide. This is why faithful computer simulators for biological processes are needed: the use of such tools overcomes the problems of budgeting, since the cost per software experiment is low in comparison with biological lab experiments. Therefore, data sets of the desired size can be obtained, thus allowing for correct statistical inferences and hypothesis testing.
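As an illustration of the kind of statistical processing applied to such data sets, a percentile bootstrap estimate of a confidence interval for the mean can be sketched as follows; the data values and the resample count here are invented for the example and are not the experimental NFAT/AP1 data.

```python
import random
import statistics

# Percentile bootstrap (sketch): resample the observed values with
# replacement many times, and take empirical quantiles of the resampled
# means as a confidence interval for the mean.

def bootstrap_ci(sample, n_boot=10000, alpha=0.05, rng=random):
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.mean(means), (lo, hi)

data = [0.81, 0.79, 0.85, 0.76, 0.80, 0.82]  # illustrative ratio sample
est, (lo, hi) = bootstrap_ci(data, n_boot=2000)
```

Cheap software experiments make the large samples required by such resampling methods easy to obtain, which is precisely the budgeting argument made above.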
Figure 4. MOlNET Experiments Regarding T Cell Activation: the ratios TCR/CD28 and NFAT/AP1 have similar trends.
We ran some experiments with our MOlNET software, using different input amounts for TCR and CD28 so as to vary their ratio across an interval. The output data of interest was the ratio between NFAT and AP1, the proteins with the main role in deciding the T cell behavior. A representation of the data yielded by the experiments can be seen in Figure 4. The number of molecules was varied over the interval 10³ to 10⁵ for each component, and 10⁻³ to 10⁻⁵ for the Michaelis-Menten constant of the enzymatic reactions. However, these ranges may not always hold true, and further restrictions may be imposed. On the other hand, they could reflect the diversity of molecular environments throughout the T cell population. To be rigorous, each reaction may be varied in terms of the number of molecules that participate, and in terms of kinetic rates. Varying these parameters for all reactions in the network produces a huge number of software experiments. We have applied the bootstrap method in the case of the ratio between the quantities of NFAT and AP1. After generating 500,000 bootstrap samples from the original ratio sample, we obtained an estimate of the mean equal to 0.8023, as compared to the sample average of 0.8071. The full range of the bootstrap estimates was (0.2140, 2.4179), and from it we were able to indicate a 95% confidence interval for the mean (0.7071, 1.9003). For comparison, the 95% confidence interval obtained by using the classical t-test was (0.7220, 0.8921), which shows that the population of ratios has
a distribution very close to normal. Moreover, by applying the bootstrap method we obtain a similar conclusion regarding the ratio TCR/CD28. Other software experiments we made were related to the Lck recruitment model, which involves successive activations and inactivations of Zap70, Lck, and phosphatase. We ran several tests regarding the quantitative aspects of this particular interaction network. A chart representing the distribution of the quantitative modification corresponding to each of the substances involved is presented below (we used 1.2 for a 20% increase).
It can be seen that the samples follow a normal distribution, with the mean falling on either side of the 1.0 mark, depending on the evolution of the system – for example, the mean for inactivated Lck is clearly less than 1, while the mean for its counterpart, activated Lck, is greater than 1. Furthermore, based on the data obtained, we could predict the 95% confidence interval for the mean of the activated Lck modification as the interval (0.91; 1.14). In this way, the data yielded by using MOlNET can be analyzed to find statistical correlations of the mechanisms inside the cell; it is also possible to make other predictions regarding the T cell signaling network. Tuning the T Cell Activation Thresholds. T cells exert important control over the immune system. Therefore the fine-tuning of T cell activity can have great consequences on the responses that the immune system triggers against viruses or bacteria, as well as on the development of autoimmune diseases. It is very useful to see how a specific molecule type, namely Cbl-b, can tune the threshold required for T cell activation. More complex molecular networks that trigger two qualitatively different cell responses (full activation and anergy) were considered in [CTDHM]. These results, together with other wet lab data, may open new perspectives in the pharmacological manipulation of immune responses. Drugs may trigger, enhance, diminish, or stop the ways in which T cells respond, adjusting the expression level or activity level of the signaling proteins. We assume that
simulations of T cell signaling mechanisms could reveal useful information on immunodeficiencies, autoimmune disorders, vaccine design, as well as the functioning of a healthy immune system. T cell activation is a threshold phenomenon that is dynamically modulated during cell maturation. It reflects the signal intensity that is necessary to increase the expression of specific genes (e.g., the IL-2 gene). Both the emergence of the threshold and its tuning depend on the dynamic interplay between positive and negative factors. As T cells receive many signals from self antigens, they have to adapt their activation thresholds in such a way that self-stimuli fall under the threshold and consequently no response is elicited against self. Furthermore, nonself antigens provide stronger signals that overcome the activation threshold, and as a consequence the cell activates and produces a certain immune response. In the following discussion, we investigate the role of Cbl-b in tuning the activation thresholds. Basically, we look for the influences that Cbl-b exerts on the level of activated Zap70. Then we add Cbl-b, and the levels of Zap70* are measured. High levels of Zap70* may trigger cell activation, while levels below the threshold do not have this effect. The expression levels of various signaling proteins vary during immune cell maturation (e.g., the level of Lck declines during development while the level of SHP phosphatase increases); our experiments consider the heterogeneity of activation thresholds at the level of a population of T cells (or T cell clones) rather than during the development of a single clone.
[Figure 5 diagram: signaling pathway involving CD45, Lck, Lck*, Lck**, PTP, PTP*, Zap70, Zap70*, Zap70**, Cbl-b, and Cbl-b*, with reactions numbered 1–9.]
Figure 5. Cbl-b Alters the Signaling Pathways.
Recent reports highlight that Cbl-b is a key regulator of activation thresholds in T cells. Many proteins are associated with Cbl-b, including Lck* and Zap70*. Cbl-b mediates a chemical modification (ubiquitination) of these activated kinases that targets them for degradation (reactions 8 and 9 in Figure 5). The specific chemical modifications due to ubiquitination are denoted by “**”. Degradation of active kinases results in the reduction of the activation of downstream signaling proteins. Furthermore, degradation of Lck can reduce the activation of Zap70, as shown in Figure 5. These events raise the threshold requirements for cell activation and prevent the development of autoimmunity. Moreover, following TCR ligation, Zap70* activates Cbl-b (reaction 7). Additionally, CD45 activates Lck (reaction 6). All these molecular events finely tune the signal intensity in such a way that it draws nearer to, or deviates from, the activation threshold. The changes in the Lck*/Lck-total ratio, as well as in the Zap70*/Zap70-total ratio, are shown in Figure 6. If the amounts of Cbl-b (and Cbl-b*) vary, and the amounts of Lck (and Lck*) vary as well, then the Zap70*/Zap70-total ratio still has slight variations (as in Figure 6). But when the amounts of Cbl-b and Cbl-b* equal 500,000 molecules, for an activation threshold set around 0.45 (that is, Zap70*/Zap70-total = 0.45, a thin horizontal line in our picture), the cell could either activate or not (during experiment 2, the signal intensity represented by a gray continuous curve is below the threshold, while in experiment 3 the threshold is overcome). These outcomes are produced by differentially regulating the amount of Lck* within the cell. In other words, Cbl-b fine-tunes T cell reactivity, and this also depends on the amount of Lck*. The software experiments are represented by the numbers from 1 to 4 along the horizontal coordinate in Figure 6.
The input values of Lck and Lck* varied from 10,000 to 100,000, and from 500,000 to 1,000,000 molecules, while the input values of Zap70 and Zap70* were kept constant at 100,000. Cbl-b and Cbl-b* were first set to 10,000 (dark lines), and then set to 500,000. The kinetic constants associated with the corresponding reactions of Figure 5 were k1 = 0.001, k2 = k3 = k4 = k5 = k6 = 1, k7 = 0.1, and k8 = k9 = 0.01. Lck-total = Lck + Lck* and Zap70-total = Zap70 + Zap70*. 1.6. Conclusions. P systems were not initially created to model biological systems, although similarities can be observed. Despite many results of universality and several formal language problems which can be explained in an easier and more elegant manner, it is useful and desirable to have more connections with applied computer science and molecular biology. Trying to strengthen the connections between P systems and molecular biology, we present a new version of P systems related to the client-server model used in computer networks. We use this version of P systems, called client-server P systems, to model the signaling network of the T cell. Then we introduce the client-server P simulators; these abstract simulators represent a useful intermediate step from a formal theory suitable for theoretical results to a
software implementation of a molecular network. Using the abstract simulators called CSPsim, we design a software environment called MOlNET. Various software experiments (e.g., for tuning the activation thresholds) take into consideration both qualitative and quantitative aspects. In this way we get relevant biological information on T cell behavior, particularly on T cell responses. The simulations explain how various factors play a certain role in determining the T cell response, relating input and output values of the T cell mechanism. In particular, we see how Cbl-b can tune the threshold required for T cell activation. As far as we know, the existence of a server (an executive) for the molecular interaction is a new feature; the existence of a coordinator in molecular processes was recently advanced by some biologists. Therefore it is difficult to compare the architecture of our system with other systems. Regarding its functionality, we can mention its flexibility, modularity, and expressive power. The described approach may serve as a platform for further experimental and theoretical investigations.
Figure 6. Lck*/Lck-total and Zap70*/Zap70-total Levels after TCR Triggering and Cbl-b Activation.
2. Membrane Description of the Sodium-Potassium Pump
The sodium-potassium pump is a fundamental transmembrane protein present in all animal cells. The functioning of the pump is described and analyzed in the formal framework of P systems, considered here as tools for modelling a bio-cellular process. New features such as variable membrane labelling, activation conditions for rules, a membrane bilayer, and specific communication rules are defined, with the aim of providing a more appropriate description of the pump. A Sevilla carpet of the sodium-potassium pump is given, as a starting point to identify the pumps as the processors able to
execute the rules of a high-level P system in a maximally parallel and nondeterministic manner, activated and controlled by steady-state concentrations. 2.1. Sodium-Potassium Exchange Pump. Cell membranes are crucial to the life of the cell. Defining the boundary of living cells, membranes have various functions and participate in many essential cell activities, including barrier functions, transmembrane signaling, and intercellular recognition. The sodium-potassium exchange pump is a transmembrane transport protein in the plasma membrane that establishes and maintains the appropriate internal concentrations of sodium (Na+) and potassium (K+) ions in cells. Using the energy from the hydrolysis of ATP molecules, the pump transports three Na+ outside the cell in exchange for two K+ that are taken inside the cell, against their concentration gradients. This exchange is an important physiologic process, critical in maintaining the osmotic balance of the cell, the resting membrane potential of most tissues, and the excitable properties of muscle and nerve cells. Here we model the movement of ions and the conformational transformations of the sodium-potassium pump in the framework of P systems, hence using discrete mathematics instead of partial differential equations. The membrane structure is identified with a string of correctly matching square parentheses, placed in a unique pair of matching parentheses; each pair of matching parentheses corresponds to a membrane. Each membrane identifies a region, delimited by it and the membranes (if any) immediately inside it. Usually, a unique label is associated with each membrane. For instance, the string [0 [1 ]1 [2 ]2 ]0 identifies a membrane structure consisting of three membranes; the skin membrane is labelled with the number 0, and the other two membranes are placed inside the skin at the same hierarchical level and are labelled with the numbers 1 and 2.
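The bracket notation above can be parsed mechanically; the following sketch builds a tree of labelled membranes from such a string (the (label, children) tuple representation is chosen only for illustration).

```python
import re

# Parse a membrane structure string such as "[0 [1 ]1 [2 ]2 ]0" into a tree
# of labelled membranes, checking that opening and closing labels match.

def parse_membranes(s):
    tokens = re.findall(r"\[\s*(\w+)|\]\s*(\w+)", s)
    stack = [("root", [])]
    for open_lbl, close_lbl in tokens:
        if open_lbl:                       # "[l" opens membrane l
            node = (open_lbl, [])
            stack[-1][1].append(node)
            stack.append(node)
        else:                              # "]l" closes membrane l
            lbl, _ = stack.pop()
            assert lbl == close_lbl, "mismatched membrane labels"
    return stack[0][1]

tree = parse_membranes("[0 [1 ]1 [2 ]2 ]0")
# skin membrane labelled 0, containing two membranes labelled 1 and 2
```

The stack mirrors the nesting of regions: each open bracket descends into a new region, and each matching close bracket returns to the enclosing one.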
The notion of membrane structure and the modalities of communication will be refined in order to give a more appropriate model for the sodium-potassium pump. The sodium-potassium pump (briefly, the Na-K pump) is a primary active transport system driven by a cell membrane ATPase carrying sodium ions outside and potassium ions inside the cell. Many animated representations of the pump are available on the web; one can be found at http://arbl.cvmbs.colostate.edu/hbooks/molecules/sodium_pump.html. The description given in Table 1 is known as the Post-Albers cycle with occluded states. According to it, the sodium-potassium pump has essentially two conformations, namely E1 and E2 (both may be phosphorylated or dephosphorylated), which correspond to the mutually exclusive states in which the pump exposes ion binding sites alternately on the intracellular (E1) and extracellular (E2) sides of the membrane. Ion transport is mediated by transitions between these conformations. During the translocation across the cell membrane, there exist conformations in which the transported ions are
Table 1. The Post–Albers Cycle with Occluded States
(1) E1·ATP + 3Na+_cyt ⇋ E1·ATP·3Na+
(2) E1·ATP·3Na+ ⇋ E1∼P·(3Na+)_occ + ADP
(3) E1∼P·(3Na+)_occ ⇋ E2∼P·2Na+ + Na+_ext
(4) E2∼P·2Na+ ⇋ E2∼P + 2Na+_ext
(5) E2∼P + 2K+_ext ⇋ E2∼P·2K+
(6) E2∼P·2K+ ⇋ E2·(2K+)_occ + Pi
(7) E2·(2K+)_occ + ATP ⇋ E1·ATP·2K+
(8) E1·ATP·2K+ ⇋ E1·ATP + 2K+_cyt
occluded (trapped within the protein) before being released to the other side, and thus unable to be in contact with the surrounding media. Remark 5.1. In Table 1, A + B means that A and B are present together (e.g., in a test tube). A·B means that A and B are bound to each other noncovalently. E2∼P indicates that the phosphoryl group is covalently bound to E2. Pi is the inorganic phosphate group (i means inorganic). ⇋ indicates that the process can go either way, i.e., it can proceed in a reversible way. In Figure 7 we give a graphical representation of the conformations and the functioning of the pump: Na+ ions are pictured as small squares, K+ ions as small circles; for simplicity, neither ATP molecules nor phosphates are represented. Let us consider an initial state, following the release of K+ ions into the cytosol (Figure 7, left middle), where the pump is in the conformation E1 and is associated with ATP (we describe it as E1·ATP in Table 1). Its cation binding sites are empty and open to the intracellular space. In this situation, the affinity is high for Na+ and low for K+. Consequently, three Na+ ions bind to the intracellular cation sites; this corresponds to the first equation of Table 1 and to the upper left corner of Figure 7. The binding of sodium catalyzes a phosphorylation of the pump by the previously bound ATP: the γ phosphate of ATP is transferred to an aspartate residue of the pump structure. The new conformation of the pump is described as E1∼P in Table 1. During this process, the Na+ ions are occluded (Figure 7, top middle, and Table 1, equation (2)). Thereafter the pump undergoes a conformational change to the E2P state and loses its affinity for Na+. The Na+ ions are subsequently released; first one Na+ ion is released during the conformational change from E1P to E2P, when the cation binding sites become oriented toward the extracellular side (Figure 7, upper right, and Table 1, equation (3)).
The pump is in the E2P state, and the affinity for Na+ ions is very low; the two remaining Na+ ions are released into the
[Figure 7 diagram: the conformational cycle E1·ATP → E1·ATP·3Na → E1P·(3Na) → E2P·2Na → E2P → E2P·2K → E2·(2K) → E1·ATP·2K → E1·ATP, with Na+ ions drawn as small squares and K+ ions as small circles.]
Figure 7. Sodium–Potassium Pump with Occluded States
extracellular medium (Figure 7, right middle, and Table 1, equation (4)). The binding sites now have a high affinity for K+. Two external K+ ions can bind; this corresponds to equation (5) of Table 1, and to the lower right corner of Figure 7. The binding of K+ at the outer surface induces the dephosphorylation of the E2P conformation, which turns into E2. The release of the inorganic phosphate Pi into the intracellular medium is accompanied by the occlusion of the K+ ions (equation (6)). De-occlusion of the K+ ions toward the intracellular space is then catalyzed by ATP (equation (7) in Table 1 and Figure 7, lower left corner): the pump returns to the conformation which has a high affinity for sodium ions (namely, E1) and is still bound to ATP. The affinity for K+ ions decreases and they are released into the intracellular medium (equation (8) in Table 1). The pump protein is now ready to initiate a new cycle from the active conformation E1·ATP (Figure 7, left middle). A detailed description of the overall functioning of the pump, as well as some graphical representations, can be found in [72]. The Na-K pump is under the control of many regulatory mechanisms and pathways. For instance, the intracellular concentrations of ions determine the maximal activity of the pump: whenever cellular Na+ rises, the pump works more rapidly to expel the excess of Na+, thus lowering its concentration to a steady-state value.
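The forward direction of the cycle described above can be sketched as a simple state machine over the pump conformations; reaction rates and reversibility are deliberately ignored in this illustration, and the conformation names are plain-text renderings of those in Table 1.

```python
# Forward direction of the Post-Albers cycle as a toy state machine;
# reversibility and kinetics are ignored in this sketch.

CYCLE = [
    "E1.ATP",         # binding sites open to the cytosol
    "E1.ATP.3Na",     # three cytosolic Na+ bound            (equation 1)
    "E1~P.(3Na)occ",  # phosphorylation, Na+ occluded        (equation 2)
    "E2~P.2Na",       # one Na+ released outside             (equation 3)
    "E2~P",           # remaining Na+ released outside       (equation 4)
    "E2~P.2K",        # two extracellular K+ bound           (equation 5)
    "E2.(2K)occ",     # dephosphorylation, K+ occluded       (equation 6)
    "E1.ATP.2K",      # ATP binds, K+ de-occluded            (equation 7)
]                     # equation 8: K+ released, back to E1.ATP

def step(conformation):
    """Advance the pump one (forward) reaction along the cycle."""
    i = CYCLE.index(conformation)
    return CYCLE[(i + 1) % len(CYCLE)]

state = "E1.ATP"
for _ in range(len(CYCLE)):
    state = step(state)   # one full cycle returns to the start
```

Treating the conformations as discrete states in this way anticipates the membrane model of the next subsection, where the conformations become labels attached to the membrane.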
2. MEMBRANE DESCRIPTION OF THE SODIUM-POTASSIUM PUMP
Regarding the relationship between the kinetic parameters of the transport process and the efficiency of the pump, we can mention that the rate constants of competing steps (which would decrease the efficiency) are quite small. This ensures that the binding and the release of substrate occur at the proper point in the cycle. For example, the reaction E1 · ATP ⇋ E1 ∼ P + ADP of equation (2) is slower than the reaction of equation (1). As a consequence, E1 has enough time to bind sodium ions before undergoing the transition to E2. Similar relationships among rate constants ensure that ions are released from the enzyme before they come back to the side at which they were initially bound. In other words, the slow rate constants channel the enzyme along a reaction path in which the hydrolysis of ATP is tightly coupled to the transport process.
2.2. Modelling Na–K Pump with Membrane Systems. Since the Na–K pump is a transmembrane protein associated with the phospholipid bilayer of the plasma membrane, it suffices to consider a membrane structure consisting of the skin membrane only. However, in order to formally describe the pump with a high resemblance to its biological structure and functioning, we have to introduce a notation for the cellular lipid bilayer. To this aim, we use two symbols of type "|" which, placed next to the couple of square parentheses denoting a membrane, characterize a further intermediate region: the skin membrane with bilayer will be denoted as [| |]. The skin membrane with bilayer now delimits three distinct spaces, namely the extracellular environment (in short, Env), the lipid bilayer of the membrane (Bilayer), and the cytoplasm of the cell (Reg): Env [Bilayer| Reg |Bilayer] Env. In the following we will use only this semibracket notation for membranes. The conformations of the pump are described by means of labels attached to the membrane, that is [|l, with l ∈ L, L = {E1 · ATP, E1P, E2, E2P}.
We point out that, since we want to model the functioning of the pump during its transport activity, we consider the conformation E1 only in the case in which the binding of an ATP molecule (which triggers the process) has already occurred, and we describe this situation with the label E1 · ATP. The labels E1P, E2P correspond to the phosphorylated conformations of the pump with high affinity for sodium and potassium ions, respectively, while E2 corresponds to the dephosphorylated conformation with high affinity for potassium ions, as already described. For the description of occluded states, we consider a subset Locc of L, namely Locc = {E1P, E2}, where the first label denotes the occluded conformation for sodium ions, and the second the occluded conformation for potassium ions. The alphabet of objects is V = {Na, K, ATP, ADP, P}, where the symbols naturally represent the substances present in the cell and involved in the functioning of the pump. We also consider a second alphabet, Vocc = {N̄a, K̄}, to denote only those substances (sodium and potassium
ions) which, at some time, are occluded within the pump, that is, inside the bilayer region. The occlusion of these substances is expressed by means of overlined symbols, which will be present in a configuration if and only if the label of the membrane corresponds, at that time, to an occluded conformation of the pump. Note the appearance of the symbols ATP and P in both the alphabet V and the label set L; the meaning of this aspect will be explained in the sequel.
In the initial configuration we assume that the multiset inside the region consists of n sodium symbols, m potassium symbols and s molecules of ATP, that is MReg = {nNa, mK, sATP}, the multiset in the environment is MEnv = {n′Na, m′K}, for some integers n, n′, m, m′, s ≥ 0, while MBilayer is empty. We denote by RNa = n′/n and RK = m′/m the ratios of occurrences of sodium and potassium ions, respectively, outside and inside the membrane. These values are used to describe the starting time for the functioning of the pump. Indeed, in real cells it is known that the cytoplasmic concentration of sodium is much lower than the external concentration, while the opposite holds for the potassium concentration. For instance, a concentration of 145mM of sodium and 4mM of potassium can be found outside the cell, while 12mM of sodium and 139mM of potassium can be found inside. Whenever such natural conditions vary, e.g. when the intracellular concentration of Na+ or the extracellular concentration of K+ rises above the steady-state values, the Na–K pumps in the plasma membrane try to re-establish the right physiological conditions. Hence, we assume that the activation of the pump is triggered by a change in the values of the ratios evaluated at the current step. Specifically, we define two threshold conditions, RNa ≤ k1 and RK ≥ k2 (for some fixed threshold values k1, k2 ∈ R, corresponding to the ratios at steady-state concentrations), such that the pump will not be activated if they are not satisfied.
Otherwise, the pump starts its functioning.
In the model of the pump, a generic evolution rule has the form

MEnv [MBilayer |l MReg  −→C  M′Env [M′Bilayer |l′ M′Reg

where C is a (possibly null) threshold condition associated to the rule (written over the arrow), l, l′ ∈ L, and M′Env, M′Bilayer, M′Reg are the multisets obtained from MEnv, MBilayer, MReg by the application of the basic operations. The modification of objects happens only for the symbols ATP and, in a different way, for Na and K. In the cell, the Na–K pump is autophosphorylated by the hydrolysis of one molecule of ATP, which produces one molecule of ADP (released free in the cell) and one inorganic phosphate group (covalently bound to the pump). Hence, the transformation of the object ATP will involve the use of membrane labels that, as said before, correspond to conformations of the pump. On the other hand, the objects Na, K only change their status (that is, occluded or not) during the functioning of the pump, thus we only allow each Na (K, respectively) to be transformed
into its corresponding occluded symbol N̄a (K̄, respectively) and vice-versa, on condition that, in that step, the current membrane label belongs to Locc. Precisely, we can have the following two situations:

[x|l → [y|l′

when l ∈ L, l′ ∈ Locc and, if x ∈ {Na}+ then y ∈ {N̄a}+ (in this case, l′ = E1P), if x ∈ {K}+ then y ∈ {K̄}+ (in this case, l′ = E2), or

[x|l′ → [y|l

when l′ ∈ Locc, l ∈ L and, if x ∈ {N̄a}+ then y ∈ {Na}+ (in this case, l′ = E1P), if x ∈ {K̄}+ then y ∈ {K}+ (in this case, l′ = E2).
We also define two new types of evolution rules, which are needed only for the communication of objects, but not for their modification.
(1) A binding rule has the form

b_out,within : x [|l → [x′|l′   or   b_in,within : [|l x → [x′|l′
for some x, x′ ∈ V+ and l, l′ ∈ L (both not necessarily distinct). The application of a binding rule of the type b_out,within (b_in,within) causes the movement of a multiset x from the environment (region) into the bilayer.
(2) An unbinding rule has the form

u_within,in : [x|l → [|l′ x′   or   u_within,out : [x|l → x′ [|l′
for some x, x′ ∈ V+ and l, l′ ∈ L (both not necessarily distinct). The application of an unbinding rule of the type u_within,in (u_within,out) causes the movement of a multiset x from the bilayer into the region (environment). The communication of objects happens when an unbinding rule is used after a binding rule (not necessarily in consecutive steps). For instance, if we first use a rule of the type b_out,within and then we use a rule of the type u_within,in, then we have the passage of some objects from the outer environment into the internal region, while using first b_in,within and then u_within,out causes the passage of some objects from the internal region to the outer environment. In contrast to the usual and direct communication with target indication in P systems, here the passage of objects happens by means of the interplay of two rules, and this corresponds to the presence of an intermediate region.
Remark 5.2. We stress here the fact that this kind of communication could be defined in another analogous way, namely using classical evolution rules with a new target indication of the type within, which would cause an object to pass from the environment or from the internal region directly into the bilayer. However, this kind of mechanism would not be enough to model the Na–K pump, since in this case it is important to consider also the current label of the membrane and let the rule (possibly) modify it too. Indeed, in this system the membrane plays a fundamental role, since it represents (a
part of) the cellular pump we are modelling, and not only a separator for different regions.
Given all the necessary definitions, the functioning of the Na–K pump with occluded states can now be described by means of the following rules:

r1 : [ |E1·ATP 3Na −→ [3Na|E1·ATP   (under the condition (RNa ≤ k1) ∧ (RK ≥ k2))
r2 : [3Na|E1·ATP −→ [3N̄a|E1P ADP
r3 : [3N̄a|E1P −→ Na [2Na|E2P
r4 : [2Na|E2P −→ 2Na [ |E2P
r5 : 2K [ |E2P −→ [2K|E2P
r6 : [2K|E2P −→ [2K̄|E2 P
r7 : [2K̄|E2 ATP −→ [2K|E1·ATP
r8 : [2K|E1·ATP −→ [ |E1·ATP 2K
The application and meaning of the rules is as follows. If the threshold conditions in rule r1 are both satisfied, the pump is in conformation E1 · ATP and (at least) three Na symbols are present inside the internal region, then the pump is activated and three sodium ions are bound to the bilayer. Note that they are still not occluded within the bilayer, since the current membrane label is not in the set Locc. Rule r2 corresponds to the autophosphorylation of the pump: ATP is transformed into ADP with the (mute) production of one copy of the object P. Accordingly, the conformation of the pump is changed from E1 · ATP into the phosphorylated form E1P. As mentioned above, the object P now becomes part of the membrane label, hence it undergoes a "structural modification" by passing from being an element of the alphabet V to being a component of the membrane labels in the set L. We believe that, instead of considering P as a free object, it is more appropriate to use the chosen formal description (rather than using, instead of rule r2, the couple of rules [3Na|E1·ATP −→ [3Na|E1 ADP P, and then [3Na|E1 P −→ [3N̄a|E1P) since, actually, the phosphate directly intervenes in the structural conformation of the pump (which is formally described here by means of membrane labels). The right-hand side of rule r2 denotes the occlusion of sodium ions, which is possible because the membrane label is in Locc. In the system configuration [3N̄a|E1P, rule r3 can be applied: the conformation of the pump changes from E1P to E2P, the sodium ions become de-occluded and are exposed to the extracellular side of the protein, where one of them is immediately released free in the environment. This is exactly an unbinding rule of the form u_within,out which, applied after the binding rule r1 (of the form b_in,within), allows the communication of objects from the region to the environment. Rule r4 describes the unbinding of the remaining two sodium ions.
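To make the cycle concrete, rules r1–r8 can be simulated by a small program. This is only a sketch: the threshold values k1 and k2 below are illustrative assumptions (chosen near the steady-state ratios 145/12 and 4/136), while the initial multisets are those of the Sevilla carpet example in Section 2.3.

```python
# Sketch of the deterministic cycle r1..r8. The threshold values k1, k2
# are illustrative assumptions; the initial multisets are the ones used
# for the Sevilla carpet example (136 Na / 10 K outside, etc.).
def pump_cycle(env, reg, k1=12.0, k2=0.03):
    """Run one activation cycle (rules r1..r8); return False if r1 is blocked."""
    r_na = env["Na"] / reg["Na"]              # R_Na = n'/n
    r_k = env["K"] / reg["K"]                 # R_K  = m'/m
    if not (r_na <= k1 and r_k >= k2):        # threshold condition of rule r1
        return False
    if reg["Na"] < 3 or env["K"] < 2 or reg["ATP"] < 1:
        return False                          # not enough objects for a full cycle
    reg["Na"] -= 3                            # r1: 3 Na bound into the bilayer
    reg["ADP"] = reg.get("ADP", 0) + 1        # r2: phosphorylation, ADP released, Na occluded
    env["Na"] += 1                            # r3: E1P -> E2P, one Na released outside
    env["Na"] += 2                            # r4: the remaining 2 Na released outside
    env["K"] -= 2                             # r5: 2 K bound into the bilayer
    reg["P"] = reg.get("P", 0) + 1            # r6: dephosphorylation, P into region, K occluded
    reg["ATP"] -= 1                           # r7: a fresh ATP binds, K de-occluded
    reg["K"] += 2                             # r8: 2 K released into the region
    return True

env = {"Na": 136, "K": 10}
reg = {"Na": 21, "K": 130, "ATP": 9}
cycles = 0
while pump_cycle(env, reg):
    cycles += 1
print(cycles, env, reg)   # 3 cycles: Env reaches 145 Na / 4 K, Reg 12 Na / 136 K / 6 ATP
```

With these (assumed) thresholds the loop performs exactly three activation cycles, reproducing the three iterations of the Sevilla carpet.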
When the system configuration is [ |E2P, no objects are present in the bilayer and at least two copies of the object K are present in the environment, then rule r5 can be applied: two potassium ions are bound within the bilayer and the pump conformation remains unchanged. By releasing into the region the phosphate P attached to the pump (expressed with the label E2P), the conformation turns to the occluded state E2 and the objects K are transformed into their corresponding occluded objects K̄ (rule r6). De-occlusion and successive release of K+ ions is catalyzed by the binding of ATP to the pump. Hence, if at least one ATP symbol is present inside the region at the current step, its binding to the pump causes the membrane label to change from E2 to E1 · ATP, and the occluded K̄ symbols to pass to a non-occluded state (rule r7). Note that, similarly to the structural modification of the symbol P, in rule r7 there is a passage of the symbol ATP from being a component of MReg to being part of the membrane label. Finally, by applying the unbinding rule r8, two K symbols are released inside the region. An activation cycle of the pump is thus finished. The pump is in conformation E1 · ATP, the bilayer is empty and the simulation of the pump activity can start again from rule r1. The threshold conditions must be evaluated again according to the current multisets, and subsequent activation cycles might occur.
2.3. Sevilla Carpets and Pump Systems. Sevilla carpets are introduced in [CDHMT] as a generalization of the control word of a Chomsky grammar, in order to describe computations in a P system. A Sevilla carpet is a table with time on its horizontal axis, and the rules of a P system along its vertical axis; for each rule, this table contains certain information at each computation step.

Membrane Rule         | Iteration 1      | Iteration 2      | Iteration 3
1                     | 10000000         | 10000000         | 10000000
2                     | 01000000         | 01000000         | 01000000
3                     | 00100000         | 00100000         | 00100000
4                     | 00010000         | 00010000         | 00010000
5                     | 00001000         | 00001000         | 00001000
6                     | 00000100         | 00000100         | 00000100
7                     | 00000010         | 00000010         | 00000010
8                     | 00000001         | 00000001         | 00000001
Env: 136Na, 10K       | 139Na, 8K        | 142Na, 6K        | 145Na, 4K
Reg: 21Na, 130K, 9ATP | 18Na, 132K, 8ATP | 15Na, 134K, 7ATP | 12Na, 136K, 6ATP
We provide the Sevilla carpet of the Na–K pump where, at each computation step, it is specified whether a certain rule is used or not. We consider an initial configuration given by the following input values: 136 Na+ ions and 10 K+ ions in the environment, together with 21 Na+ ions, 130 K+ ions and 9 ATP energy units in the cytoplasmic region. These values correspond to the external and internal concentrations. As it was mentioned, the Na–K
pump is activated in order to re-establish the physiological steady-state values, namely a concentration of 145mM of sodium and 4mM of potassium outside the cell, together with 12mM of sodium and 139mM of potassium inside. We have a carpet with eight rules and three iterations. It is easy to note that the pump has a deterministic behaviour provided by a clear sequence of rules, each rule being triggered by the successful execution of the previous one. Therefore the pump follows the same sequence of rules in each iteration, exhibiting a sequential behaviour. Since we are in the framework of the bio-inspired approach of membrane computing, and since parallelism is an important feature, it is natural to wonder where this parallelism lies when we discuss biological membranes, and how it is actually activated and controlled.

Pumps            | Activation       | Distribution
Na-K pumps       | 3 activations    | 0010100001
Ca pumps         | 0 activations    | 000000
Glucose-Na pumps | 0 activations    | 0000000
other pumps      | 0 activations    | 00000000000
Env              | 136Na, 10K       | 145Na, 4K
Reg              | 21Na, 130K, 9ATP | 12Na, 136K, 6ATP
Considering a hierarchical organization and description of a biological cell, we can identify the various pumps as the processors able to execute the rules of a membrane in a maximally parallel and nondeterministic manner (see also [C5]), where the activation of the pumps is triggered by concentration gradient conditions (we have defined specific threshold conditions corresponding to the ratios at steady-state concentrations). A final result is obtained when a stable state is reached. For the previous example, considering a certain number of Na–K pumps (say 10), as well as other pumps of a cell, the Sevilla carpet corresponding to the high-level computation of a general P system where pumps are seen as primitive constructions is given above. Here we emphasize only the communication of Na+ and K+ ions, as well as the consumption of ATP molecules (see the last lines of the table). It is easy to note the parallel execution of three pumps competing in this case for Na+ and K+ ions, as well as the nondeterministic choice of their activations, represented in the Sevilla carpet by the distribution 0010100001 saying that the Na-K pumps 3, 5 and 10 of the given 10 pumps were activated. We stress the fact that this is just an attempt to consider a cell as a P system working with pumps as processors of the rules. Undoubtedly, it is far from being a real description, since in such a case many biological factors should be considered as well. A strong and well motivated hint for future research is thus established.
Related and Future Work. We briefly present here some possible extensions of the model and future investigations. In [CCT], the Na–K pump was described by using the process algebra π-calculus. In [C4], the transfer mechanisms were described step by step, and software tools of verification
were also applied. This means that it would be possible to verify properties of the described systems by using a computer program, and to use the verification software as a substitute for expensive lab experiments. A similar development for P systems would be a very useful achievement.
The P system proposed here presents features similar to P automata (see [32]), where the membranes are only allowed to communicate with each other, and objects are never modified during a computation, but only exchanged among regions, or consumed from the environment through the skin membrane. As a future extension of our work, the model of the sodium-potassium pump can be translated into its corresponding P automaton (with the appropriate type of object communication), and then its computational power could be investigated. In this way, we think we could establish a deeper theoretical link between the theory of formal languages and the (present description of a) biological transmembrane protein.
From the biological point of view, it is known that the drug ouabain (and other similar cardiac glycosides) is a specific inhibitor of the pump; it competes with K+ ions for the same binding site (in conformation E2P) on the extracellular side of the pump [2]. Many animal cells swell, and often burst, when they are treated with ouabain. The occurrence of ouabain can be modelled by adding a new object o to the alphabet V, and by considering the rule

o [ |E2P −→ [o|E2P ,
which could be (nondeterministically) chosen instead of the rule r5 previously defined. The functioning of the pump would then be blocked, since no other rule can be further applied. It would be worthwhile to study, from a biological perspective, the consequences of pump inhibition, also due to changes in intracellular pH (via the exchange system of sodium and hydrogen) or calcium (via the exchange system of sodium and calcium), and the dynamics governing the interactions among the pump and other proteins. Considering different behaviours of the pump, in the presence of specific chemicals, could open interesting scenarios of research. Finally, it would be interesting to extend P systems with some stochastic features able to characterize the molecular interactions involving the dynamic efficiency of the pump and other quantitative aspects (e.g., kinetic rates, energy, pump failures). Regarding the sodium-potassium pump, the whole transport process can have failures, and the pump can fail to transport Na+ out in exchange for K+ that are taken in. For example, due to the lower rate constant for the reaction E1 · ATP ⇋ E1 ∼ P + ADP, E1 has enough time to bind sodium before undergoing the transition to E2. However, the reaction E1 · ATP ⇋ E1 ∼ P + ADP can sometimes take place before the sodium ions bind to the pump; this occurs quite rarely compared to the usual activity of the pump. Mainly, ATP acts faster than the sodium ions, and the pump changes its conformation (from open inside
to open outside) without the sodium ions. This simple biological example motivates the study of stochastic aspects related to the P system proposed here. Therefore, in order to have a more realistic description of the pump, we could give a probabilistic model and add some probability distributions to rules, in a similar way they were attached to the pump actions in [CCT]. In this way, it could also be possible to model the quantitative behavior of a P system. 3. Distributed Evolutionary Algorithms Inspired by Membranes We present an analysis of the similarities between distributed evolutionary algorithms and membrane systems. The correspondences between evolutionary operators and evolution rules and between communication topologies and policies in distributed evolutionary algorithms and membrane structures and communication rules in membrane systems are identified. As a result of this analysis we propose new strategies of applying the operators in evolutionary algorithms and new variants of distributed evolutionary algorithms. The behaviour of these variants is numerically tested for some continuous optimization problems. Evolutionary algorithms are reliable methods in solving hard problems in the field of discrete and continuous optimization. They are approximation algorithms which achieve a trade-off between solution quality and computational costs. Despite the large variety of evolutionary algorithms (genetic algorithms, evolution strategies, genetic programming, evolutionary programming), all of them are based on the same idea: evolve a population of candidate solutions by applying some rules inspired by biological evolution: recombination (crossover), mutation, and selection [38]. An evolutionary algorithm acting on only one population is similar to a one-membrane system. Distributed evolutionary algorithms, which evolve separate but communicating (sub)populations, are more like membrane systems. 
How similar are membrane computing and distributed evolutionary computing? Can ideas from membrane computing improve the evolutionary algorithms, or vice-versa? An attempt to build a bridge between membrane computing and evolutionary computing is given in [83], where a membrane algorithm is developed by using a membrane structure together with ideas from genetic algorithms (crossover and mutation operators) and from metaheuristics for local search (tabu search). We go further and deeper, and analyze the relationship between different membrane structures and different communication topologies specific to distributed evolutionary algorithms. Moreover, we propose new strategies for applying evolutionary rules and new variants of distributed evolutionary algorithms inspired by the structure and functioning of membrane systems. We analyze first the correspondence between evolutionary operators and evolution rules. As a result of this analysis, we propose a more flexible strategy of applying the evolutionary operators. Similarities between membrane
structures and communication topologies are presented, as well as the similarities between communication rules in membrane systems and communication policies in distributed evolutionary algorithms. As a result of this analysis, a new variant of distributed evolutionary algorithms is proposed. Finally, a numerical analysis of the new variants of evolutionary algorithms is presented.
3.1. Evolutionary Operators and Evolution Rules. Evolutionary algorithms (EAs) working on only one population (panmictic EAs) can be interpreted as particular membrane systems having only one membrane. Inside this single membrane there is a population of candidate solutions for the problem to be solved. Usually a population is an m-tuple of n-dimensional vectors: P = x_1 . . . x_m, x_i = (x_i^1, . . . , x_i^n) ∈ D, where D is a discrete or a continuous domain depending on the problem to be solved. The evolutionary process consists in applying the recombination, mutation and selection operators to the current population in order to obtain a new population.
Recombination (crossover): The aim of this operator is to generate new elements from a set of elements (called parents) selected from the current population. Thus we have a mapping R : D^r → D^q, where usually q ≤ r. Typical examples of recombination operators are:

(30)  r = q = 2,  R((u^1, . . . , u^n), (v^1, . . . , v^n)) = ((u^1, . . . , u^k, v^{k+1}, . . . , v^n), (v^1, . . . , v^k, u^{k+1}, . . . , u^n))

and

(31)  r arbitrary, q = 1,  R(x_{i_1}, . . . , x_{i_r}) = (1/r) Σ_{j=1}^{r} x_{i_j}
The first type of recombination corresponds to one-point crossover (where k ∈ {1, . . . , n − 1} is an arbitrary cut point) used in genetic algorithms, while the second example corresponds to intermediate recombination used in evolution strategies [38].
Mutation: The aim of this operator is to generate a new element by perturbing one element from the current population. This can be modelled by a mapping M : D → D defined by M((u^1, . . . , u^n)) = (v^1, . . . , v^n). Typical examples are:

(32)  v^i = 1 − u^i with probability p, v^i = u^i with probability 1 − p;   and   v^i = u^i + N(0, σ_i), i = 1, . . . , n.

The first example is used in genetic algorithms based on a binary coding (u^i ∈ {0, 1}), while the second one is typical for evolution strategies. N(0, σ_i) denotes a random variable with normal distribution, of zero mean and standard deviation σ_i.
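The operators above translate directly into code. The following sketch implements (30)–(32); the mutation parameters p and sigma are illustrative assumptions:

```python
import random

# Sketch of the operators (30)-(32); parameter values are illustrative.
def one_point_crossover(u, v):
    """Eq. (30): r = q = 2, with an arbitrary cut point k in {1, ..., n-1}."""
    k = random.randint(1, len(u) - 1)
    return u[:k] + v[k:], v[:k] + u[k:]

def intermediate_recombination(parents):
    """Eq. (31): r arbitrary, q = 1; componentwise average of the parents."""
    n = len(parents[0])
    return tuple(sum(p[i] for p in parents) / len(parents) for i in range(n))

def bitflip_mutation(u, p=0.05):
    """Eq. (32), binary coding: each bit is flipped with probability p."""
    return tuple(1 - ui if random.random() < p else ui for ui in u)

def gaussian_mutation(u, sigma=0.1):
    """Eq. (32), evolution strategies: add N(0, sigma) to each component."""
    return tuple(ui + random.gauss(0, sigma) for ui in u)
```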
Selection: It is used to construct a set of elements starting from the current population, such that the best elements with respect to the objective function of the optimization problem to be solved are favoured. It does not generate new configurations, but only sets of existing (not necessarily distinct) configurations. Thus it maps D^q to D^r and can be used in two main situations: selection of parents for recombination (in this case q = m, r < m, and the parents selection is not necessarily based on the quality of elements), and selection of survivors (in this case q ≥ m, r = m, and the survivors are stochastically or deterministically selected by taking into account their quality with respect to the optimization problem). In the following, the mapping corresponding to parents selection is denoted by Sp and the mapping corresponding to survivors selection is denoted by Ss.
A particular evolutionary algorithm is obtained by combining these evolutionary operators and by applying them iteratively to a population. Typical ways of combining the evolutionary operators lead to the main evolutionary strategies: generational and steady state. In the generational (synchronous) strategy, at each step a population of new elements is generated by applying recombination and mutation. The population of the next generation is obtained by applying selection to the population of new elements or to the joined population of parents and offspring. The general structure of a generational EA is presented in Algorithm 4, where X(t) denotes the population corresponding to generation t, z denotes an offspring and Z denotes the population of offspring. The symbol ∪+ denotes an extended union, which allows multiple copies of the same element (as in multisets). The mapping M̄ is the extension of M to D^q, i.e. M̄(x_{i_1}, . . . , x_{i_q}) = (M(x_{i_1}), . . . , M(x_{i_q})).

Algorithm 4 Generational Evolutionary Algorithm
1: Random initialization of population X(0)
2: t := 0
3: repeat
4:   Z := ∅
5:   for all i ∈ {1, . . . , m} do
6:     Z := Z ∪+ (M̄ ◦ R ◦ Sp)(X(t))
7:   end for
8:   X(t + 1) := Ss(X(t) ∪+ Z)
9:   t := t + 1
10: until a stopping condition is satisfied
In the steady state (asynchronous) strategy, at each step a new element is generated by recombination and mutation, and assimilated into the population if it is good enough (e.g. better than one of its parents, or than the worst element in the population). More details are in Algorithm 5. The simplest way to interpret a generational or a steady state evolutionary algorithm as a membrane system is to consider the entire population as a structured object in a membrane, and the compound operator applied as one
evolution rule which includes recombination, mutation and selection. Such an approach represents a rough and coarse handling which does not offer flexibility. A more flexible approach would be to consider each evolutionary operator as an evolution rule.

Algorithm 5 Steady State Evolutionary Algorithm
1: Random initialization of population X(0)
2: t := 0
3: repeat
4:   z := (M ◦ R ◦ Sp)(X(t))
5:   X(t + 1) := Ss(X(t) ∪+ z)
6:   t := t + 1
7: until a stopping condition is satisfied
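Both strategies can be sketched as short programs. The objective function (the sphere function), the domain bounds and all parameter values below are illustrative assumptions, not taken from the text:

```python
import random

# Illustrative sketch of the generational (Algorithm 4) and steady-state
# (Algorithm 5) strategies; objective, bounds and parameters are assumptions.
def sphere(x):                       # objective to minimize, optimum at 0
    return sum(xi * xi for xi in x)

def offspring(pop):                  # (M o R o S_p): select, recombine, mutate
    u, v = random.sample(pop, 2)     # parents selection S_p
    k = random.randint(1, len(u) - 1)
    child = u[:k] + v[k:]            # one-point crossover R
    return tuple(c + random.gauss(0, 0.3) for c in child)   # mutation M

def generational_ea(f, m=20, n=5, generations=100):
    pop = [tuple(random.uniform(-5, 5) for _ in range(n)) for _ in range(m)]
    for _ in range(generations):
        z = [offspring(pop) for _ in range(m)]
        pop = sorted(pop + z, key=f)[:m]     # survivors selection S_s on X(t) U+ Z
    return pop[0]

def steady_state_ea(f, m=20, n=5, steps=2000):
    pop = [tuple(random.uniform(-5, 5) for _ in range(n)) for _ in range(m)]
    for _ in range(steps):
        z = offspring(pop)
        worst = max(pop, key=f)
        if f(z) < f(worst):                  # assimilate z only if good enough
            pop[pop.index(worst)] = z
    return min(pop, key=f)
```

The generational variant produces m offspring per step and truncates the joined population, while the steady-state variant assimilates one offspring at a time only if it is good enough.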
Evolutionary operators are usually applied in an ordered manner (as in Algorithms 4 and 5): first parents selection, then recombination and mutation, and finally survivors selection. Starting from the way the evolution rules are applied in a membrane system, we consider that the rules can be independently applied to the population elements, meaning that no predefined order between the operators is imposed. At each step any operator can be applied, up to some restrictions ensuring the existence of the population. The recombination and mutation operators R and M can be of any type, with possible restrictions imposed by the coding of population elements. By applying these operators, new elements are created. These elements are unconditionally added to the population. Therefore by applying the recombination and mutation operators, the population size is increased. When the population size reaches an upper limit (e.g. twice the initial size of the population), then the operators R and M are inhibited. The role of selection is to modify the distribution of elements in the population by eliminating or by cloning some elements. Simple selection operators could be defined by eliminating the worst element of the population, or by cloning the best element of the population. When selection is applied by cloning, the population size is increased, and selection is inhibited whenever the size reaches a given upper bound. On the other hand, when selection is applied by eliminating the worst element, the population size is reduced, and selection is inhibited whenever the size reaches a given lower bound (e.g. half of the initial size of the population). By denoting with x_1 . . . x_m (x_i ∈ D) the entire population, with x_{i_1} . . . x_{i_r} an arbitrary part of the population, with x^∗ the best element and with x^− the worst element, the evolutionary operators can be described more in the spirit of evolution rules from membrane systems as follows:
Rule 1 (recombination): x_{i_1} . . . x_{i_r} → x_{i_1} . . . x_{i_r} x′_{i_1} . . . x′_{i_q}, where (x_{i_1}, . . . , x_{i_r}) = Sp(x_1, . . . , x_m) is the set of parents defined by the selection operator Sp, and (x′_{i_1}, . . . , x′_{i_q}) = R(x_{i_1}, . . . , x_{i_r}) is the offspring set obtained by applying the recombination operator R to this set of parents;
Rule 2 (mutation): x_i → x_i x′_i, where x′_i = M(x_i) is the perturbed element obtained by applying the mutation operator M to x_i;
Rule 3a (selection by deletion): x^− → λ, meaning that the worst element (with respect to the objective function) is eliminated from the population;
Rule 3b (selection by cloning): x^∗ → x^∗ x^∗, meaning that the best element (with respect to the objective function) is duplicated;
Rule 4 (insertion of random elements): x → xξ, where ξ ∈ D is a randomly generated element and x is an arbitrary element of the population.
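Read as operations on a variable-size population, Rules 1–4 can be sketched as follows (the objective f and the domain bounds are illustrative assumptions):

```python
import random

# Sketch of Rules 1-4 as operations on a variable-size population;
# the objective f and the domain bounds are illustrative assumptions.
def rule_recombination(pop):
    """Rule 1: the parents stay in the population; an offspring is added."""
    a, b = random.sample(pop, 2)                    # parents selection S_p
    k = random.randint(1, len(a) - 1)
    pop.append(a[:k] + b[k:])                       # one-point crossover offspring

def rule_mutation(pop, sigma=0.3):
    """Rule 2: x_i -> x_i x'_i with x'_i = M(x_i)."""
    x = random.choice(pop)
    pop.append(tuple(xi + random.gauss(0, sigma) for xi in x))

def rule_delete_worst(pop, f):
    """Rule 3a: x^- -> lambda (the worst element is removed)."""
    pop.remove(max(pop, key=f))

def rule_clone_best(pop, f):
    """Rule 3b: x* -> x* x* (the best element is duplicated)."""
    pop.append(min(pop, key=f))

def rule_insert_random(pop, n, low=-5.0, high=5.0):
    """Rule 4: x -> x xi, with xi randomly generated in D."""
    pop.append(tuple(random.uniform(low, high) for _ in range(n)))
```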
The last rule does not correspond to the classical evolutionary operators, but is used in evolutionary algorithms in order to stimulate the population diversity. By following the spirit of membrane computing, these rules should be applied in a fully parallel manner. However, in order to avoid going too far from the classical way of applying the operators in evolutionary algorithms, we consider a sequential application of rules. Thus we obtain an intermediate strategy: the evolutionary operators are applied sequentially, but in an arbitrary order. Such a strategy, based on a probabilistic decision concerning the operator to be applied at each step, is described in Algorithm 6. The rules involved in the evolutionary process are: recombination, mutation, selection by deletion, selection by cloning and random element insertion. The probabilities corresponding to these rules are pR, pM, pSd, pSc and pI ∈ [0, 1]. By applying the evolutionary operators in such a probabilistic way, we obtain a flexible algorithm which works with variable size populations. In Algorithm 6 the population size corresponding to iteration t is denoted by m(t). Even if variable, the population size is limited by a lower bound m_∗ and an upper bound m^∗. An important feature of Algorithm 6 is given by the fact that only one operator is applied at each step, and thus it can be considered as an operator-oriented approach. This means that first an operator is probabilistically selected, and only afterwards are the elements on which it is applied selected. Another approach would be an element-oriented one, meaning that at each step all elements can be involved in a transformation, and for each one the rule to be applied is selected (also probabilistically). After such a parallel step, a mechanism of regulating the population size can be triggered. If the population becomes too small, then selection by cloning can be applied, or some random elements could be inserted.
If the population becomes too large, then selection by deletion can be applied. This strategy is characterized by a parallel application of rules, so it is more in the spirit of membrane computing. However, this strategy did not provide better
results than Algorithm 6 when it was tested on continuous optimization problems.

Algorithm 6 Evolutionary Algorithm with Random Selection of Operators
1: Random initialization of the population X(0) = x1(0) . . . xm(0)(0)
2: t := 0
3: repeat
4:   generate a uniform random value u ∈ (0, 1)
5:   if (u < pR) ∧ (m(t) < m^∗) then
6:     apply Rule 1 (recombination)
7:   end if
8:   if (u ∈ [pR, pR + pM)) ∧ (m(t) < m^∗) then
9:     apply Rule 2 (mutation)
10:  end if
11:  if (u ∈ [pR + pM, pR + pM + pSd)) ∧ (m(t) > m_∗) then
12:    apply Rule 3a (selection by deletion)
13:  end if
14:  if (u ∈ [pR + pM + pSd, pR + pM + pSd + pSc)) ∧ (m(t) < m^∗) then
15:    apply Rule 3b (selection by cloning)
16:  end if
17:  if (u ∈ [pR + pM + pSd + pSc, 1]) ∧ (m(t) < m^∗) then
18:    apply Rule 4 (insertion of a random element)
19:  end if
20:  t := t + 1
21: until a stopping condition is satisfied
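As an illustration, the random-operator strategy of Algorithm 6 can be sketched in Python. This is a minimal sketch, not the implementation used in the experiments: the operator callables `recombine`, `mutate` and `new_element` are placeholders for concrete choices, and the stopping condition is reduced to a step budget.

```python
import random

def evolve_random_operators(pop, fitness, p_R, p_M, p_Sd, p_Sc,
                            m_low, m_up, new_element, recombine, mutate,
                            max_steps=1000):
    """One-membrane EA with probabilistic rule selection (Algorithm 6 sketch).

    pop: list of candidate solutions; fitness is minimized.
    recombine, mutate, new_element are user-supplied operators.
    """
    for _ in range(max_steps):
        u, m = random.random(), len(pop)
        if u < p_R and m < m_up:                               # Rule 1
            pop.extend(recombine(random.sample(pop, min(2, m))))
        if p_R <= u < p_R + p_M and m < m_up:                  # Rule 2
            pop.append(mutate(random.choice(pop)))
        if p_R + p_M <= u < p_R + p_M + p_Sd and m > m_low:    # Rule 3a
            pop.remove(max(pop, key=fitness))                  # delete worst
        if p_R + p_M + p_Sd <= u < p_R + p_M + p_Sd + p_Sc and m < m_up:
            pop.append(min(pop, key=fitness))                  # Rule 3b: clone best
        if p_R + p_M + p_Sd + p_Sc <= u and m < m_up:          # Rule 4
            pop.append(new_element())                          # random insert
    return min(pop, key=fitness)
```

Note that, as in the pseudocode, at most one rule fires per step (the intervals for u are disjoint), and the population size stays in [m_low, m_up].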
We can expect the behaviour of such algorithms to be different from that of the more classical generational and steady-state algorithms. However, from a theoretical viewpoint, such an algorithm can still be modelled by a Markov chain, and the convergence results still hold. This means that if we use a mutation operator based on a stochastic perturbation described by a distribution whose support covers the domain D (e.g. the normal distribution) and an elitist selection (the best element found during the search is not eliminated from the population), then the best element of the population converges in probability to the optimum. The difference appears with respect to the finite-time behaviour of the algorithm, namely the ability to approximate the optimum (within a certain desired precision) in a finite number of steps. Preliminary tests suggest that for some optimization problems the strategy with random selection of operators works better than the generational and steady-state strategies. This means that by using ideas from the application of evolution rules in membrane systems, we can obtain new evolutionary strategies with different dynamics.
300
5. MODELLING POWER OF MEMBRANE SYSTEMS
3.2. Communication Topologies and Policies. As already presented, a one-population evolutionary algorithm can be mapped into a one-membrane system with rules associated to the evolutionary operators. Closer to membrane computing are the distributed evolutionary algorithms, which work with multiple (sub)populations. In each subpopulation the same or different evolutionary operators can be applied, leading to homogeneous or heterogeneous distributed EAs, respectively. Introducing a structure over the population has several motivations [102]: (i) it achieves a good balance between exploration and exploitation in the evolutionary process, preventing premature convergence (convergence to local optima) in the case of global optimization problems; (ii) it stimulates population diversity, helping to deal with multimodal or dynamic optimization problems; (iii) it is more suitable for parallel implementation. Therefore, besides the possibility of improving efficiency by parallel implementation, structuring the population into communicating subpopulations allows the development of new search mechanisms which behave differently from their serial counterparts [102]. The multi-population model of evolutionary algorithms, also called the island model, is based on the idea of dividing the population into several communicating subpopulations. An evolutionary algorithm is applied in each subpopulation for a given number of generations, and then a migration process is started. During the migration process some elements can change their subpopulations, or clones of some elements can replace elements belonging to other subpopulations. The main elements which influence the behaviour of a multi-population evolutionary algorithm are the communication topology and the communication policy. The communication topology specifies which subpopulations are allowed to communicate, while the communication policy describes how the communication is carried out.
The communication topology in a distributed evolutionary algorithm plays a role similar to that of the membrane structure in a membrane system. On the other hand, the communication policy in distributed evolutionary algorithms is related to the communication rules in membrane systems.

3.3. Communication Topologies and Membrane Structures. The communication topology describes the connections between subpopulations. It can be modelled by a graph having nodes corresponding to subpopulations, and edges linking subpopulations which communicate in a direct manner. According to [1], typical examples of communication topologies are: the fully connected topology (each subpopulation can communicate with any other subpopulation), the linear or ring topology (only neighbouring subpopulations can communicate), and the star topology (all subpopulations communicate through a kernel subpopulation). More specialized communication topologies are hierarchical topologies [49] and hypercube topologies [47]. The fully connected, star and linear topologies can be easily described by using
hierarchical membrane structures, which allow transferring elements either into an inner membrane or into the outer membrane (see Figure 8).

Fully Connected Topology. Let us consider s fully connected subpopulations. The fully connected topology can be modelled by using s + 1 membranes, namely s elementary membranes and one skin membrane containing them (see Figure 8(a)). The elementary membranes correspond to the given s subpopulations, and they contain both evolution rules and communication rules. The skin membrane plays only the role of a communication environment; thus it contains only communication rules and the objects which have been transferred from the inner membranes. The transfer of an element between two inner membranes is based on two steps: the transfer of the element from the source membrane to the skin membrane, and the transfer of the element from the skin membrane to the target membrane. Another structure which corresponds to a fully connected topology is that associated to tissue P systems.
Figure 8. Communication topologies in distributed evolutionary algorithms and their corresponding membrane structures: (a) Fully Connected Topology; (b) Star Topology; (c) Linear Topology.

Star Topology. The membrane structure corresponding to a star topology with s subpopulations is given by one skin membrane corresponding to the kernel subpopulation, and s − 1 elementary membranes corresponding to the other subpopulations (see Figure 8(b)). The main difference from the previous structure associated to a fully connected topology is that the skin membrane not only plays the role of an environment for communication, but can also contain evolution rules.

Linear Topology. In this case a subpopulation p can communicate only with its neighbour subpopulations p + 1 and p − 1. The corresponding structure is given by nested membranes, each membrane corresponding to a subpopulation (see Figure 8(c)).
Different situations appear in the case of ring and other topologies [47] which are associated with cyclic graph structures. In these situations the corresponding membrane structure is given by a net of membranes, as in tissue P systems.

3.4. Communication Policies and Communication Rules. A communication policy refers to the way the communication is initiated, the way the migrants are selected, and the way the immigrants are incorporated into the target subpopulation. The communication can be initiated in a synchronous way, after a given number of generations, or in an asynchronous way, when a given event occurs. The classical variants of migrant selection are random selection and selection based on the fitness value (the best elements migrate, and the immigrants replace the worst elements of the target subpopulation). The communication policies are similar to communication rules in membrane computing, meaning that all communication steps can be described by some typical communication rules in membrane systems. There are two main variants for transferring elements between subpopulations: (i) sending a clone of an element from the source subpopulation to the target subpopulation (pollination); (ii) moving an element from the source subpopulation to the target one (plain migration). An element is selected with a given probability pm, usually called the migration probability. If the subpopulation size should be kept constant, then in the pollination case, for each newly incorporated element, another element (e.g. a random one, or the worst one) is deleted. In the case of plain migration, a replacing element (usually randomly selected) is sent from the target subpopulation to the source one. In order to describe a random pollination process between s subpopulations by using communication rules specific to a membrane system, we consider the membrane structure described in Figure 8(a).
Each elementary membrane corresponds to a subpopulation and, besides the objects corresponding to the elements in the subpopulation, it also contains some objects which are used for communication. These objects, denoted by r_id, are identifiers of the regions with which the subpopulation corresponding to the current region can communicate (in a fully connected topology of s subpopulations the identifiers belong to {1, . . . , s}). On the other hand, when the migration step is initiated, a given number of copies of a migration symbol η are created in each elementary membrane. The multiplicity of η is related to the migration probability pm (e.g. it is ⌊m·pm⌋, where m is the size of the subpopulation in the current region). Possible communication rules, for each type of membrane, describing the pollination process are presented in the following:

Elementary membranes. Let us consider the membrane corresponding to a subpopulation Si (i ≠ 0). There are two types of rules: an export rule ensuring the transfer of an element to the skin membrane, which plays the role of a communication environment, and an assimilation rule ensuring, if
it is necessary, that the subpopulation size is kept constant. The export rule can be described as:

(33) R_export^{Si} : η x^{Si} r_id^{Si} → (x^{Si}, here)(x^{Si} r_id^{Si} d, out)
The assimilation rule can be described as:

(34) R_ass^{Si} : d x^{Si} → λ
In both rules, x^{Si} denotes an arbitrary element from the subpopulation Si, and r_id^{Si} identifies the region where clones of the elements from the subpopulation Si can be sent. At each application of R_export^{Si} a copy of the symbol η is consumed, and a copy of a deletion symbol d is created in the skin membrane.

Skin membrane. The communication rule corresponding to the skin membrane is:

(35) R_0 : d x^{Si} r_id^{Si} → (d x^{Si}, in_id)
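To make the rule dynamics concrete, here is a small Python sketch of one pollination round over the membrane structure of Figure 8(a): the export rule clones an element out of each elementary membrane together with a target region identifier, the skin rule routes it into the target region, and the assimilation rule deletes one element per immigrant to keep subpopulation sizes constant. Regions are plain lists; the multiset encoding and the explicit bookkeeping of the symbols η, r_id and d are simplifying assumptions, not the formalism above.

```python
import random

def pollination_step(regions, eta_counts):
    """One pollination round over the structure of Figure 8(a) (a sketch).

    regions[i] is the list of elements of subpopulation S_i; region 0 is
    the skin, used only for routing.  eta_counts[i] is the number of
    migration symbols created in region i.
    """
    skin = []  # objects exported by the elementary membranes
    for i, n_eta in eta_counts.items():
        for _ in range(min(n_eta, len(regions[i]))):
            x = random.choice(regions[i])             # clone stays, copy exported
            targets = [j for j in regions if j not in (0, i)]
            skin.append((x, random.choice(targets)))  # pick a target identifier
    for x, j in skin:                                 # skin rule: route into region j
        regions[j].append(x)                          # immigrant incorporated
        regions[j].remove(random.choice(regions[j]))  # assimilation keeps size fixed
    return regions
```

The size of every elementary region is invariant under a round, mirroring the effect of the deletion symbol d.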
In the case of plain random migration, any element x^{Si} from a source subpopulation Si can be exchanged with an element x^{Sj} from a target subpopulation Sj. Such a communication process is similar to that in tissue P systems, described as (i, x^{Si}/x^{Sj}, j). Other communication policies (e.g. those based on elitist selection or replacement) can be described similarly.

3.5. Distributed Evolutionary Algorithms Inspired by Membrane Systems. A first communication strategy inspired by membrane systems is that used in the membrane algorithm proposed in [83], which can be interpreted as a hybrid distributed evolutionary algorithm based on a linear topology (Figure 8(c)) and a tabu search heuristic. The basic idea of communication between membranes is that of moving the best element into the inner membrane and the worst one into the outer membrane. The skin membrane receives random elements from the environment. The general structure of such an algorithm is presented in Algorithm 7. Another communication topology, corresponding to a simple membrane structure but not very common in distributed evolutionary computing, is the star topology (Figure 8(b)). In the following we propose a hybrid distributed evolutionary algorithm based on this topology. Let us consider a membrane structure consisting of a skin membrane containing s − 1 elementary membranes. Each elementary membrane i contains a subpopulation Si on which an evolutionary algorithm is applied. This evolutionary algorithm can be based on a random application of rules. The skin membrane also contains a subpopulation of elements, but different transformation rules are applied here (e.g. local search rules instead of evolutionary operators). The communication takes place only between S1 (corresponding to the skin membrane) and the other subpopulations. The algorithm consists of two stages: an evolutionary
Algorithm 7 Distributed Evolutionary Algorithm Based on Linear Topology
1: for all i ∈ {1, . . . , s} do
2:   Random initialization of the subpopulation Si
3: end for
4: repeat
5:   for all i ∈ {1, . . . , s} do
6:     Apply an EA to Si for τ steps
7:   end for
8:   Apply local search to the best element in S1
9:   for all i ∈ {1, . . . , s − 1} do
10:    send a clone of the best element from Si to Si+1
11:  end for
12:  for all i ∈ {2, . . . , s} do
13:    send a clone of the worst element from Si to Si−1
14:  end for
15:  Add a random element to S1
16:  for all i ∈ {1, . . . , s} do
17:    Delete the two worst elements of Si
18:  end for
19: until a stopping condition is satisfied
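The communication stage of Algorithm 7 can be sketched as follows; `random_element` is a user-supplied generator standing in for the element received from the environment (an assumption, not fixed by the pseudocode), and fitness is minimized. Note that, taken literally, the innermost subpopulation receives one clone but loses two elements per round, so its size shrinks by one.

```python
def linear_communication(subpops, fitness, random_element):
    """Communication stage of Algorithm 7 (sketch), fitness minimized.

    subpops[0] plays the role of S_1 (the skin); higher indices are
    more deeply nested membranes.
    """
    s = len(subpops)
    best = [min(p, key=fitness) for p in subpops]
    worst = [max(p, key=fitness) for p in subpops]
    for i in range(s - 1):                 # clone of best of S_i -> S_{i+1}
        subpops[i + 1].append(best[i])
    for i in range(1, s):                  # clone of worst of S_i -> S_{i-1}
        subpops[i - 1].append(worst[i])
    subpops[0].append(random_element())    # skin receives a random element
    for p in subpops:                      # delete the two worst of each S_i
        for _ in range(2):
            p.remove(max(p, key=fitness))
    return subpops
```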
stage and a communication stage, which are repeatedly applied until a stopping condition is satisfied. The general structure is described in Algorithm 8. The evolutionary stage consists in applying an evolutionary algorithm to each of the subpopulations in the inner membranes for τ iterations. The evolutionary stage is applied in parallel to all subpopulations. The subpopulations in the inner membranes are initialized only at the beginning; thus each subsequent evolutionary stage starts from the current state of the population. In this stage the only transformation of the population in the skin membrane consists in applying a local search procedure to the best element of the population. The communication stage consists in sending clones of the best element from the inner membranes to the skin membrane, by applying the rule x∗ → (x∗, here)(x∗, out) in each elementary membrane. Moreover, the worst elements from the inner membranes are replaced with randomly selected elements from the skin membrane. If the subpopulation S1 should have more than s elements, then at each communication stage some randomly generated elements are added. The effect of such a communication strategy is that the worst elements in the inner membranes are replaced with the best elements from other membranes or with randomly generated elements. In order to ensure the elitist character of the algorithm, the best element from the skin membrane is conserved at each step. It represents the approximation of the optimum we are looking for.
Algorithm 8 Distributed Evolutionary Algorithm Based on a Star Topology
1: for all i ∈ {1, . . . , s} do
2:   Random initialization of the subpopulation Si
3: end for
4: repeat
5:   for all i ∈ {2, . . . , s} do
6:     Apply an EA to Si for τ steps
7:   end for
8:   Apply local search to the best element in S1
9:   Reset subpopulation S1 (all elements in S1 except the best one are deleted)
10:  for all i ∈ {2, . . . , s} do
11:    send a clone of the best element from Si to S1
12:  end for
13:  add random elements to S1 (if its size should be larger than s)
14:  for all i ∈ {2, . . . , s} do
15:    Replace the worst element of Si with a copy of a randomly selected element from S1
16:  end for
17: until a stopping condition is satisfied
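A sketch of the communication stage of Algorithm 8, with `subpops[0]` playing the role of the skin population S1. The parameters `min_skin_size` and `random_element` are assumptions filling in details ("add random elements to S1 if its size should be larger than s") that the pseudocode leaves open; fitness is minimized.

```python
import random

def star_communication(subpops, fitness, random_element, min_skin_size):
    """Communication stage of Algorithm 8 (sketch), fitness minimized."""
    skin = subpops[0]
    elite = min(skin, key=fitness)
    skin[:] = [elite]                       # reset S_1, keeping the best element
    for p in subpops[1:]:                   # clones of the inner bests -> S_1
        skin.append(min(p, key=fitness))
    while len(skin) < min_skin_size:        # pad S_1 with random immigrants
        skin.append(random_element())
    for p in subpops[1:]:                   # worst of S_i replaced by a copy
        p.remove(max(p, key=fitness))       # of a random element of S_1
        p.append(random.choice(skin))
    return subpops
```

Keeping the skin elite in place mirrors the elitist character of the algorithm: the best element found so far is never lost.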
3.6. Numerical Results. The aim of the experimental analysis was twofold: (i) to compare the behaviour of the evolutionary algorithms with random application of operators (Algorithm 6) with that of those based on generational and steady-state strategies (Algorithms 4 and 5); (ii) to compare the behaviour of the distributed evolutionary algorithms based on linear and star topologies (Algorithm 7 and Algorithm 8) with that of an algorithm based on a fully connected topology and random migration. The particularities of the evolutionary algorithm applied in each subpopulation and the values of the control parameters used in the numerical experiments are presented in the following.

Evolutionary operators. The generation of offspring is based on a single variation operator inspired by differential evolution algorithms. It combines the recombination and mutation operators, so an offspring zi = R(xi, x∗, xr1, xr2, xr3) is obtained componentwise by

(36) zi^j = γ x∗^j + (1 − γ)(x_{r1}^j − x∗^j) + F · (x_{r2}^j − x_{r3}^j) · N(0, 1), with probability p
     zi^j = (1 − γ) xi^j + γ U(aj, bj), with probability 1 − p,

where r1, r2 and r3 are random values from {1, . . . , m}, x∗ is the best element of the population, F ∈ (0, 2], p ∈ (0, 1], γ ∈ [0, 1] and U(aj, bj) is a random value uniformly generated in the domain of values for component j.
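The variation operator of Equation (36) can be implemented componentwise as follows; this is a sketch under the natural reading of the formula, not the code used for the experiments.

```python
import random

def de_offspring(x, x_best, pop, gamma, F, p, bounds):
    """Componentwise variation operator of Equation (36), a sketch.

    x is the current element, x_best the best element of the
    population pop, and bounds[j] = (a_j, b_j) is the domain of
    component j.
    """
    r1, r2, r3 = random.sample(range(len(pop)), 3)
    z = []
    for j in range(len(x)):
        if random.random() < p:             # differential-evolution branch
            z.append(gamma * x_best[j]
                     + (1 - gamma) * (pop[r1][j] - x_best[j])
                     + F * (pop[r2][j] - pop[r3][j]) * random.gauss(0, 1))
        else:                               # uniform reinitialization branch
            a, b = bounds[j]
            z.append((1 - gamma) * x[j] + gamma * random.uniform(a, b))
    return z
```

For γ = 0 and p = 0 the operator degenerates to a copy of x, which makes the two extreme settings of γ easy to check.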
In the generational strategy an entire population of offspring z1 . . . zm is constructed by applying the above rule. The survivors are selected by comparing each parent xi with its offspring zi and choosing the better one. In the sequential strategy, at each step one offspring is generated, and it replaces the worst element of the population if it is better. In the variant based on Algorithm 6 the following rules are probabilistically applied: the recombination operator given by Equation (36) is applied with probability pR, selection by deletion is applied with probability pSd, and the insertion of random elements is applied with probability pI. Since there are two variants of the recombination operator (for γ = 0 and for γ = 1), each one can be applied with a given probability, pR0 and pR1; these probabilities satisfy pR0 + pR1 = pR.

Test functions. The algorithms have been applied to some classical test functions (see Table 2) used in the empirical analysis of evolutionary algorithms. All these problems are of minimization type, the optimal solution being x∗ ∈ D and the optimal value being f∗ ∈ [−1, 1]; x∗ and f∗ have been randomly chosen for each test problem. In all these tests the problem size was n = 30. The domains of the decision variables are D = [−100, 100]^n for the sphere function, D = [−32, 32]^n for Ackley's function, D = [−600, 600]^n for Griewank's function and D = [−5.12, 5.12]^n for Rastrigin's function.

Table 2. Test Functions

Sphere:    f1(x) = f∗ + Σ_{i=1}^n (xi − xi∗)²
Ackley:    f2(x) = f∗ − 20 exp(−0.2 √((1/n) Σ_{i=1}^n (xi − xi∗)²)) − exp((1/n) Σ_{i=1}^n cos(2π(xi − xi∗))) + 20 + e
Griewank:  f3(x) = f∗ + (1/4000) Σ_{i=1}^n (xi − xi∗)² − Π_{i=1}^n cos((xi − xi∗)/√i) + 1
Rastrigin: f4(x) = f∗ + Σ_{i=1}^n ((xi − xi∗)² − 10 cos(2π(xi − xi∗)) + 10)
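The four test functions of Table 2 (shifted so that the optimum x∗ takes the value f∗) translate directly into Python; the following is a straightforward transcription, with `x_opt` and `f_opt` standing for x∗ and f∗.

```python
import math

def sphere(x, x_opt, f_opt=0.0):
    return f_opt + sum((xi - oi) ** 2 for xi, oi in zip(x, x_opt))

def ackley(x, x_opt, f_opt=0.0):
    n = len(x)
    s1 = sum((xi - oi) ** 2 for xi, oi in zip(x, x_opt))
    s2 = sum(math.cos(2 * math.pi * (xi - oi)) for xi, oi in zip(x, x_opt))
    return (f_opt - 20 * math.exp(-0.2 * math.sqrt(s1 / n))
            - math.exp(s2 / n) + 20 + math.e)

def griewank(x, x_opt, f_opt=0.0):
    s = sum((xi - oi) ** 2 for xi, oi in zip(x, x_opt)) / 4000
    prod = 1.0
    for i, (xi, oi) in enumerate(zip(x, x_opt), start=1):
        prod *= math.cos((xi - oi) / math.sqrt(i))
    return f_opt + s - prod + 1

def rastrigin(x, x_opt, f_opt=0.0):
    return f_opt + sum((xi - oi) ** 2
                       - 10 * math.cos(2 * math.pi * (xi - oi)) + 10
                       for xi, oi in zip(x, x_opt))
```

Each function attains the value f∗ exactly at x = x∗, which is a convenient sanity check for the shifted formulation.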
Parameters of the algorithms. The parameters controlling the evolutionary algorithm are chosen as follows: m = 50 (population size), p = F = 0.5 (the control parameters involved in the recombination rule given in Equation (36)), ε = 10^−5 (accuracy of the optimum approximation). We consider that the search process is successful whenever it finds a configuration x for which the objective function satisfies |f∗ − f(x)| < ε, using fewer than 500000 objective function evaluations. The ratio of successful runs out of a set of independent runs (in our tests the number of independent runs of the same algorithm for different randomly
initialized populations was 30) is a measure of the effectiveness of the algorithm. As a measure of efficiency we use the number nfe of objective function evaluations, reporting both the average value and the standard deviation.

Results. Table 3 presents comparative results for the generational, steady-state and random-operator-selection (Algorithm 6) strategies. For the first two strategies the evolutionary operator described by Equation (36) was applied with γ = 0. For γ = 1 the success ratio of the generational and sequential variants is much smaller, therefore these results are not presented. The probabilities for applying the evolutionary operators in Algorithm 6 were pR = 0.5 (pR0 = 0.35, pR1 = 0.15), pSd = 0.5, pI = 0. The initial population size was m(0) = 50, and the lower and upper bounds were m_∗ = m(0)/2 and m^∗ = 2m(0), respectively. The results of Table 3 suggest that for the functions f1, f2, f3 Algorithm 6 does not prove to be superior to the generational and steady-state strategies. However, a significant improvement can be observed for function f4, which is a difficult problem for an EA based on recombination as in Equation (36). On the other hand, by changing the probability p involved in the recombination operator (e.g. p = 0.2 instead of p = 0.5), good behaviour can be obtained also with the generational and steady-state strategies. Moreover, by dynamically adjusting the probabilities of applying the evolutionary operators, the behaviour of Algorithm 6 can be improved. For instance, if one chooses pR = 0, pSd = 0.1 and pI = 0.9 whenever the average variance of the population is lower than 10^−8, then in the case of the Ackley function the success ratio is 30/30 and ⟨nfe⟩ is 19312, with a standard deviation of 13862.

Table 3. Comparison of Strategies for Applying the Evolutionary Rules in an EA

Test fct | Generational (Success, ⟨nfe⟩) | Steady state (Success, ⟨nfe⟩) | Algorithm 6 (Success, ⟨nfe⟩)
f1       | 30/30, 27308±396              | 30/30, 25223±469              | 30/30, 24758±1328
f2       | 30/30, 37803±574              | 30/30, 35223±611              | 29/30, 27461±2241
f3       | 30/30, 29198±1588             | 30/30, 27010±1016             | 28/30, 20017±1640
f4       | 19/30, 335518±551             | 18/30, 296111±4931            | 29/30, 193741±1131
The second set of experiments aimed to compare the communication strategies inspired by membranes (Algorithm 7 and Algorithm 8) with a communication strategy characterized by a fully connected topology and a random migration of elements. In all experiments the number of subpopulations was s = 5, the initial size of each subpopulation was m(0) = 10, and the number of evolutionary steps between two communication stages was τ = 100. In the case of random migration the probability of selecting an element for migration was pm = 0.1. The results in Table 4 show that the communication strategy based on the linear topology (Algorithm 7) behaves similarly to the strategy based on the fully connected topology and random migration. On the other hand, the
Table 4. Behaviour of Distributed EAs Based on a Generational EA

Test fct | Fully connected topology and random migration (Success, ⟨nfe⟩) | Linear topology, Algorithm 7 (Success, ⟨nfe⟩) | Star topology, Algorithm 8 (Success, ⟨nfe⟩)
f1       | 30/30, 30500±1290 | 30/30, 30678±897  | 30/30, 34943±4578
f2       | 30/30, 41000±2362 | 30/30, 41349±1864 | 30/30, 51230±8485
f3       | 20/30, 32500±2449 | 30/30, 33013±3355 | 26/30, 41628±8353
f4       |  7/30, 158357±666 |  4/30, 95538±106  | 24/30, 225280±1326
communication strategy based on the star topology (Algorithm 8) has a different behaviour, characterized by a slower convergence. This behaviour can be explained by the higher degree of randomness induced by inserting random elements in the skin membrane. However, this behaviour can be beneficial in the case of difficult problems (e.g. Rastrigin) by avoiding premature convergence. In the case of Rastrigin's function the success ratio of Algorithm 8 is significantly higher than in the case of the other two variants.

Table 5. Behaviour of Distributed EAs Based on Random Application of Evolutionary Operators

Test fct | Fully connected topology and random migration (Success, ⟨nfe⟩) | Linear topology, Algorithm 7 (Success, ⟨nfe⟩) | Star topology, Algorithm 8 (Success, ⟨nfe⟩)
f1       | 30/30, 42033±4101   | 30/30, 59840±11089 | 30/30, 98280±20564
f2       | 30/30, 117033±8077  | 30/30, 156363±1034 | 29/30, 266783±9653
f3       | 15/30, 51065±14151  | 17/30, 72901±38654 | 21/30, 111227±5849
f4       | 30/30, 94822±22487  | 30/30, 107412±2536 | 30/30, 111752±1978
The results in Table 5 show that by using Algorithm 6 in each subpopulation the convergence is significantly slower for the Ackley and Griewank functions, while it is significantly improved in the case of the Rastrigin function, both with respect to the other distributed variants and to the panmictic algorithms used in the experimental analysis. These results suggest that by structuring the population as in membrane systems, and applying the evolutionary operators in an unordered manner, we obtain evolutionary algorithms with new dynamics. These new dynamics lead to significantly better results for certain evolutionary operators and test functions (see the results in Table 5 for Rastrigin's function). However, the hybrid approach is not superior to the classical generational variant combined with random migration for the other test functions. Such a situation is not unusual in evolutionary computing, it being accepted that
no evolutionary algorithm is superior to all the others with respect to all problems [106]. Algorithm 8 is somewhat similar to the membrane algorithm proposed by Nishida in [83]. Both are hybrid approaches which combine evolutionary search with local search, and both are based on a communication structure inspired by membrane systems. However, there are some significant differences between these two approaches: (i) they use different communication topologies (linear topology in the membrane algorithm of [83] vs. star topology in Algorithm 8), and therefore different membrane structures (see Figure 9); (ii) they address different classes of optimization problems: combinatorial optimization vs. continuous optimization; (iii) they are based on different evolutionary rules (genetic crossover and mutation in [83] vs. differential evolution recombination here) and different local search procedures (tabu search in [83] vs. Nelder-Mead local search in the current approach); (iv) they are characterized by different granularity: micro-populations (e.g. two elements) but a medium number of membranes (e.g. 50) in [83] vs. medium-sized subpopulations (e.g. 10) but a small number of membranes (e.g. 5) here; (v) they are characterized by different communication frequencies: transfer of elements between membranes at each step in the membrane algorithm vs. transfer of elements only after τ evolutionary steps have been executed (e.g. τ = 100) here.
Figure 9. (a) Membrane Structure of Algorithm 8 (EAs in the inner membranes; Nelder-Mead local search and random immigrants in the skin). (b) Membrane Structure of Nishida's Approach (mGAs in the inner membranes; tabu search and random immigrants in the skin).
The membrane community is looking for a relationship, a link between membrane systems and distributed evolutionary algorithms. We claim that the main similarity is at a conceptual level: each important concept in distributed evolutionary computing has a correspondent in membrane computing. This correspondence is summarized in the following table:
Membrane system      | Distributed Evolutionary Algorithm
Membrane (region)    | Population
Objects              | Individuals
Evolution rules      | Evolutionary operators
Membrane structure   | Communication topology
Communication rules  | Communication policy

Besides these conceptual similarities, there are some important differences: (i) membrane systems have an exact notion of computation, while evolutionary computation is an approximate one; (ii) membrane computing is based on symbolic representations, while evolutionary computing is mainly used together with numerical representations. Despite these differences, ideas from membrane computing are useful in developing new distributed meta-heuristics. A first attempt was the membrane algorithm proposed in [83]. However, this first approach did not emphasize at all the important similarities between membrane computing and distributed evolutionary computing. This aspect motivated us to start an in-depth analysis of these similarities, with the aim of describing evolutionary algorithms by using the formalism of membrane computing. As a result of this analysis, we present a non-standard strategy of applying the evolutionary operators. This strategy, characterized by an arbitrary application of evolutionary operators, proved to behave differently from the classical generational and steady-state strategies when applied to some continuous optimization problems. On the other hand, based on the relationship between membrane structures and communication topologies, we introduce a new hybrid distributed evolutionary algorithm effective in solving some continuous optimization problems. Algorithms 6 and 8, proposed and analyzed here, are good and reliable in approximating solutions of optimization problems. This proves that by using ideas from membrane computing, new distributed metaheuristic methods can be developed.
Besides this way of combining membrane and evolutionary computing, there are at least two other research directions which deserve further investigation: (i) the use of evolutionary algorithms to evolve membrane structures; (ii) the use of the membrane systems formalism in order to understand the behaviour of distributed evolutionary algorithms.

4. Membrane Systems and Distributed Computing

Membrane systems represent an appropriate model for distributed computing, an efficient and natural environment in which to present the fundamental distributed algorithms. They can become a primary model for distributed computing, particularly for message-passing algorithms. We present the core theory, and the fundamental algorithms and problems in distributed computing.
An interesting aspect of membrane systems is that they are distributed and parallel computing devices. The complexity of distributed systems requires a suitable formal model. The link between membrane systems and distributed and parallel computing is not difficult to comprehend. If we associate each membrane with a host on a network, then a membrane containing a few such membranes corresponds to a subnet, and subsequently their parent membranes represent larger networks. Finally, the skin membrane could represent the World Wide Web. One can imagine that routing in the Internet is similar to message passing within a P system. Specifically, membranes within the same membrane can communicate directly with each other (by sending objects within the membrane), while if two membranes in different parent membranes need to pass objects to each other, this scenario corresponds to sending the packet to a router which connects the two networks. Thus P systems can provide a natural and efficient representation of distributed systems. In the context of parallel and distributed systems, the algorithmic issues studied in the sequential model require a fundamental re-thinking. In parallel systems, a problem is solved by a tightly-coupled set of processors, while in distributed systems it is solved by a set of communicating (asynchronous) processes. From the viewpoint of the theory of distributed computing, which is usually restricted to the message-passing and shared-memory models, the P systems model provides a different perspective, which is natural and easy to relate to. In P systems the process of passing objects is similar to message passing; moreover, membranes in the same biological system could have access to the same DNA, or to the same blood stream, making it possible to relate the model to a shared-memory system as well.
Both these analogies are simple and natural, and hence it is not difficult to adapt the already existing theory of distributed computing to the P systems model, which is what this section aims to do. We present the fundamentals of distributed computing, starting with algorithms for broadcast and convergecast, then the leader election problem, the mutual exclusion problem in a distributed environment, and finally the consensus problem and fault tolerance. Many of the presented algorithms are used in an example describing an immune response system against virus attacks. This example is implemented using a P system library which emulates the main functions of a P system, together with an MPI program that takes advantage of the highly parallel features provided by the model.

4.1. Basic Algorithms in Membrane Systems. Collecting and dispersing information is central to any system: local information often has to be passed around, and this is where the message-passing model becomes relevant. Two basic algorithms in any message-passing model are broadcast and convergecast. Another common algorithm discussed in a message-passing model is flooding. This algorithm constructs a spanning tree
when a graph is given. However, P systems themselves essentially have a spanning tree structure: each membrane/node of the tree has exactly one parent, except the skin membrane, which represents the root. More information and the notation we use are presented in [6].

Broadcast: Consider a system in which a membrane mi has to send some object to all the membranes (nodes) of the system. There are two cases of broadcast here: one in which mi is the skin membrane (the root node), and one in which mi is an arbitrary node. It is not difficult to see that the second case is a generalization of the first one. For ease of understanding, we start with the algorithm for the simple case, when mi is the skin membrane. Very often in a distributed computing model, the root node has to broadcast a certain message, say M. In prose, the algorithm reads as follows: the skin membrane ms first sends the message to all its children; upon receiving a message from its parent, a membrane sends the message to all its children, if any. The pseudo-code is given below.

Algorithm 9 Skin Membrane Broadcast
Initially M is in transit from ms to all its children.
Code for ms:
(1) Upon receiving no message:
(2)   terminate
Code for mi, 0 ≤ i ≤ n − 1, i ≠ s:
(3) Upon receiving M from its parent:
(4)   send M to all its children
(5)   terminate

Analysis
Message Complexity. As is evident from the above algorithm, the message is communicated from a parent membrane to a child membrane exactly once; the algorithm terminates after sending M once to all children. Thus, the total number of messages passed is equal to the number of edges in the spanning tree structure. Since the number of edges in a spanning tree with n nodes is n − 1, we obtain that n − 1 messages are passed in a P system with n membranes. The message complexity of the algorithm is therefore O(n).
Time Complexity. To understand the time complexity we first prove the following lemma.

Lemma 5.3. In every admissible execution of the skin-broadcast algorithm, every membrane at level l (i.e. at a distance of l edges from the root node in the spanning tree) receives the message M in l time.

Proof. The proof proceeds by induction on the distance l of a membrane from ms. The basis is l = 1: from the description of the algorithm, each child membrane of ms receives M from ms in time 1.
We now assume that every membrane at level l − 1, l > 1, receives the message M in time l − 1, and show that every membrane mi at distance l from the skin membrane receives M in time l. Let mj be the parent of mi in the spanning tree, i.e. mj contains membrane mi. Since mj is at distance l − 1 from ms, by the inductive hypothesis mj receives M in time l − 1. By the description of the algorithm, mj then sends the message to mi in the next time step.

By the above lemma, a P system with l levels of membranes has a time complexity of l, corresponding to a depth of l in the spanning tree configuration. In the worst case, when the spanning tree is a chain, there can be at most n − 1 levels in a system with n membranes. This shows that the time complexity of skin-broadcast in any P system is O(n). To summarize: there is a skin-broadcast algorithm for P systems with message complexity n − 1 and time complexity l, when l levels of membranes are present.

4.2. Generalised Broadcast. Having seen broadcast for the skin membrane, we now move on to a more general broadcast in which any membrane/node can broadcast a message. The prose description is as follows: a membrane mi which needs to broadcast sends the message M to its parent and to all its children, if any. A membrane mj, upon receiving a message from its parent, sends it to its children; if, however, it receives the message from a child, it sends the message to all its other children and to its parent, if any. The algorithm is given below.

Algorithm 10 Generalized Broadcast
Say membrane ma, 0 ≤ a ≤ n − 1, needs to send the message to all the membranes (nodes) in the system.
Code for ma:
(1) if (a ≠ s)
(2)   send M to its parent
(3) send M to all children
Code for mi, 0 ≤ i ≤ n − 1, i ≠ a:
(4) Upon receiving M from its parent:
(5)   send M to all children
(6) Upon receiving M from its child:
(7)   if (i ≠ s)
(8)     send M to its parent
(9)   send M to all children but the sender

Analysis
Message Complexity. As in the skin-broadcast algorithm, the message is communicated on a parent-child membrane channel exactly once: the communication on a given link is initiated either by a child or by a parent, but not both. Thus, the total number of messages passed is equal to the
number of edges in the spanning tree structure. Since the number of edges in a spanning tree with n nodes is n − 1, we obtain that n − 1 messages are passed in a P system with n membranes. The message complexity of the algorithm is therefore O(n); thus, the message complexity of generalized broadcast equals that of skin-membrane broadcast.
Time Complexity. To understand the time complexity we first prove the following lemma.

Lemma 5.4. In every admissible execution of the generalized broadcast algorithm, every membrane at level l (i.e. at a distance of l edges from the root node in the spanning tree) sends the message M to the root node in l time.

Proof. The proof proceeds by induction on the distance l of a membrane from ms. The basis is l = 1: from the description of the algorithm, each child membrane of ms sends M to ms in time 1. We now assume that every membrane at level l − 1, l > 1, takes l − 1 time to send the message M to the skin membrane, and show that every membrane mi at distance l from the skin membrane sends M in time l. Let mj be the parent of mi in the spanning tree, i.e. mj contains membrane mi. Since mj is at distance l − 1 from ms, by the inductive hypothesis mj sends M in l − 1 time. By the description of the algorithm, mj receives the message from mi in 1 time step; thus, l time units are needed to send the message from a membrane mi to the skin membrane.

Once the skin membrane receives the message, it takes l more time units to complete the broadcast, by the lemma proved for the skin-broadcast algorithm. Thus, a P system with l levels of membranes has a worst-case time complexity of 2 × l. In the worst case, when the spanning tree is a chain, there can be at most n − 1 levels in a system with n membranes. This shows that the time complexity of generalized broadcast in any P system is O(n).
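Both broadcast variants can be checked on a small membrane tree. The sketch below (with an assumed 6-membrane structure; the helper names are invented) simulates the synchronous rounds with a breadth-first traversal and confirms the n − 1 message count together with the l and 2l time bounds.

```python
from collections import deque

# parent[i] is the parent of membrane i; the skin (root) is membrane 0.
parent = [None, 0, 0, 1, 1, 2]        # an assumed 6-membrane system, l = 2 levels
children = [[] for _ in parent]
for i, p in enumerate(parent):
    if p is not None:
        children[p].append(i)

def broadcast(start):
    """Return (#messages, completion time) for a broadcast started at `start`."""
    time = {start: 0}
    messages = 0
    q = deque([start])
    while q:
        m = q.popleft()
        neighbours = children[m] + ([parent[m]] if parent[m] is not None else [])
        for nb in neighbours:
            if nb not in time:        # each tree link carries M exactly once
                time[nb] = time[m] + 1
                messages += 1
                q.append(nb)
    return messages, max(time.values())
```

Starting at the skin, the simulation uses n − 1 = 5 messages and l = 2 time units; starting at the deepest leaf, it still uses n − 1 messages but needs up to 2l time units, matching Lemma 5.4.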
To summarize: there is a generalized broadcast algorithm for P systems with message complexity n − 1 and time complexity l + k, when l levels of membranes are present and the broadcasting membrane is at the k-th level.

4.3. Convergecast. The broadcast problem aims at dispersing information held by one membrane to the other membranes of the system. Convergecast, on the contrary, aims at collecting information from all the membranes of the system at the skin membrane. Many variants of the problem exist: forwarding the sum of all the values held by membranes, forwarding the maximum value, etc. In a general convergecast algorithm, all the values are forwarded instead of the result of a particular operation; an important thing to note is that in this generalized convergecast the size of the message increases as it progresses towards the skin membrane. For simplicity we consider the algorithm forwarding the sum of all the values held by membranes. Unlike broadcast, which is initiated by the membrane that wishes to disseminate information, convergecast is initiated by the leaves -
membranes which contain no inner membranes. The algorithm is recursive and requires each membrane mi to forward the sum of the values held in its subtree, i.e. the sum of the subtree rooted at it. A membrane collects the values held by its inner membranes and computes the sum including its own value; this sum si is then forwarded to its parent membrane. Clearly, each membrane has to receive a sum from each of its children before it can forward the sum to its parent. The pseudo-code for the algorithm is given below.

Algorithm 11 Convergecast
Code for leaf membranes:
(1) Start the algorithm by sending the value xi to the parent.
Code for non-leaf membranes mi with k children:
(2) Wait to receive messages containing si1, ..., sik from the children mi1, ..., mik.
(3) Compute si = xi + si1 + ... + sik.
(4) if (i ≠ s)
(5)   Send si to the parent.

Analysis
The analysis of this algorithm is analogous to that of the skin-broadcast algorithm, since the only difference between the two is the direction of the message flow.
Message Complexity. As for the skin-broadcast algorithm, the message complexity is n − 1.
Time Complexity. The time complexity of the algorithm is O(n), since at most n − 1 levels may be present in a P system with n membranes. There is a convergecast algorithm for P systems with message complexity n − 1 and time complexity l, when l levels of membranes are present.

4.4. Leader Election in Membrane Systems. The existence of a leader in a P system can often simplify the task of coordination among the membranes. It may be useful to have a leader other than the skin membrane as the default, and the criterion for leadership may not always be met by the skin membrane. The leader election problem belongs to the general class of symmetry-breaking problems.
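Returning to Algorithm 11, the sum convergecast can be sketched as a recursive bottom-up simulation; the tree shape and the values x_i below are illustrative assumptions, not data from the chapter.

```python
# Sum convergecast: leaves start, and each membrane forwards the sum of
# its subtree to its parent (an illustrative sketch of Algorithm 11).
parent = [None, 0, 0, 1, 1]           # skin = 0; membranes 3 and 4 are leaves
values = [10, 2, 3, 4, 5]             # x_i held by each membrane
children = [[] for _ in parent]
for i, p in enumerate(parent):
    if p is not None:
        children[p].append(i)

def convergecast(m):
    # s_i = x_i plus the sums s_j received from the children of m
    return values[m] + sum(convergecast(c) for c in children[m])
```

The skin ends up with the total of all values; replacing sum with max in the recursion gives the maximum-value variant mentioned above.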
The most general variant of leader election requires exactly one node from a system of many initially similar nodes to declare itself the leader, while the others recognize the leader and declare themselves not elected.

Theorem 5.5. It is impossible to solve the leader election problem in a system where the membranes are anonymous.

We therefore assume that every membrane in the system has a unique identifier id. An algorithm is said to be uniform if it does not depend on the size of the system; conversely, non-uniform algorithms rely on knowledge of the size of the system.
4.4.1. A Simple Algorithm. The most straightforward way to solve the problem is for every membrane to send a message with the maximum id among its children (and itself) to its parent, and to wait for a response from its parent. The skin membrane, in turn, replies to all its children with a message containing the maximum id that it received. Ultimately, one membrane (the one which receives its own id back) is elected leader.

Algorithm 12 Leader Election
Initially: elected = false, children = the set of child membranes, and parent = the parent membrane.
For every membrane Mk:
(1) If children = ∅, send id to parent.
(2) Upon receiving idj from all children:
      subWinner = max(id1, id2, ..., idn, id).
(3) If parent ≠ null
      then send subWinner to parent
      else (skin membrane) send subWinner to all children.
(4) Upon receiving the message leader from parent:
      if leader = id then elected = true.
(5) If children = ∅, terminate;
      else send leader to all children and terminate.

Analysis
Message Complexity. The message complexity of the above algorithm is O(n), since every "link" (an imaginary path between a parent and a child) sees the exchange of exactly 2 messages, and in a system with n membranes there are n − 1 such links. Leader election algorithms in asynchronous rings have a lower bound of O(n log n) messages, whereas in the synchronous case a message complexity of O(n) can be achieved at the cost of time complexity. P systems are structured like a tree, which already provides a sufficient edge to break symmetry, as compared to a ring, where every substructure is symmetrical.

4.5. Mutual Exclusion in Shared Memory. Shared memory is another major communication model, and we shall see how P systems can be used effectively to model it as well.
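Before turning to shared memory, the tree-based election of Algorithm 12 can be sketched as a max-convergecast followed by a broadcast of the winner. The tree shape and the identifiers below are invented for illustration.

```python
# Leader election on a membrane tree: maxima flow up to the skin, then the
# winning id is broadcast back down (an illustrative sketch of Algorithm 12).
parent = [None, 0, 0, 1, 1]
ids = [7, 42, 3, 19, 8]               # assumed unique identifiers

children = [[] for _ in parent]
for i, p in enumerate(parent):
    if p is not None:
        children[p].append(i)

def sub_winner(m):
    # subWinner = max over this membrane's id and its children's winners
    return max([ids[m]] + [sub_winner(c) for c in children[m]])

leader = sub_winner(0)                # the skin learns the global maximum id
# downward broadcast: each membrane compares the announced leader to its own id
elected = [ids[m] == leader for m in range(len(ids))]
```

Exactly one membrane (the one holding the maximum id, here membrane 1) sets elected to true; all others recognize the leader.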
In a shared-memory model, processors communicate via a common memory area that contains a set of shared variables, also referred to as registers. In natural computing with P systems, as said above, membranes have access to the same blood stream, so mutual exclusion can easily be modelled. Before we proceed to mutual exclusion algorithms, we need to define the formal model of a shared-memory system. We assume we have
a system with n membranes m0, m1, ..., mn−1 and m registers (shared variables) R0, ..., Rm−1. Each register has a type which specifies: (1) the values the register can take, (2) the operations that can be performed on it, (3) the value returned by each operation, and (4) the updated value of the register as a result of the operation. Besides this, an initial value for the register has to be specified.
Another important distinction of shared-memory systems appears when analyzing algorithms. Unlike in message-passing models, message complexity is meaningless; on the other hand, space complexity becomes relevant. Space complexity can be measured in two ways:
• the number of registers used, and
• the number of distinct values a register can take.
Measuring the time complexity of shared-memory algorithms is still an active research area; we shall only focus on whether the number of steps in the worst-case run of the algorithm is infinite, finite, or bounded.

4.6. Mutual Exclusion Problem. The mutual exclusion problem arises when different membranes need access to a shared resource that cannot be used simultaneously. Some relevant terms are given below:
• Critical Section: a code segment that may be executed by at most one membrane at any time.
• Deadlock: a situation in which one or more membranes try to gain access to the critical section and none of them succeeds.
• Lockout: a situation in which a membrane trying to enter its critical section never succeeds.
A membrane might need to execute some additional code segments before and after the critical section in order to ensure mutual exclusion. The relevant sections of the code are:
• Entry: the code section where the membrane prepares to enter the critical section.
• Critical Section: the code section which has to be executed exclusively.
• Exit: the code section executed when a membrane leaves the critical section.
• Remainder: the remainder of the code.
Mutual exclusion, no deadlock and no lockout are achieved if the following hold:
• Mutual Exclusion: in every configuration of every execution, at most one membrane is in the critical section.
• No Deadlock: in every admissible execution, if some membranes are in the entry section, then at a later stage some membrane is in the critical section.
• No Lockout: in every admissible execution, a membrane trying to enter the critical section eventually gets in.
The test-and-set and read-modify-write primitives are powerful tools for achieving mutual exclusion; another commonly known algorithm is the Bakery algorithm, which uses read/write registers. A very appropriate algorithm for P systems is the tournament algorithm, which can be modified to suit P systems. The tournament algorithm is a bounded mutual exclusion algorithm for n processors. It is based on selecting one of two processors at every stage, and thus selecting one among n processors in ⌈log2 n⌉ stages. The algorithm is therefore recursive, and every processor that succeeds at a stage climbs up the binary tree; the processor that reaches the root gains entry to the critical section. For example, with 8 processors the tournament takes 3 stages.
The algorithm for the P system is a modified version of the above. Conceptually, the membranes within a parent membrane select one among themselves using the tournament algorithm, and the parent then forwards the request to its own parent in turn. The one membrane that succeeds in reaching the skin level gains entry to the critical section. The number of rounds k in this algorithm equals the number of levels l of the system. The pseudo-code is given below: the conventional algorithm is invoked as tournament, with the list of membranes list passed to it, and it returns the id of the membrane that wins the tournament. An important observation is that the first step is not executed in the round k = l, because there are no parent membranes at depth l, the maximum depth of the tree. The algorithm is well suited to P systems and provides mutual exclusion with no deadlock and no lockout.

4.7. Fault-Tolerant Consensus. For a system to coordinate effectively, it is often essential that every membrane/node within the system agree on a common course of action. With the help of leader election and a subsequent broadcast/flooding, a consensus can be reached. However, there are times when certain elements of a distributed system do not quite work the way they should. This section discusses the
consensus problem and fault tolerance, i.e. reaching a consensus despite failures within parts of the system.

Algorithm 13 Tournament Algorithm
Here l represents the maximum depth of the system, and mj is the parent membrane of mi.
(1) for k = l to 1
     1) all parent membranes mi at depth k:
        if (list ≠ ∅)
          want = tournament(list)
        else
          want = −1
        end if
        send want to its parent, if any.
     2) all leaf membranes mi at depth k:
        if (access to the critical section is needed)
          want = i
        else
          want = −1
        end if
        send want to its parent mj, if any.
     3) all parent membranes mi at depth k − 1:
        for (o = 0 to c − 1)   // c is the number of child membranes
          receive wio from mio
          if (wio ≠ −1)
            add wio to list
        end for
    end for

Consider a system in which each membrane mi needs to coordinate with the rest of the membranes and choose a common course of action, i.e. agree upon a value for the variable decision. A solution to the consensus problem must guarantee the following:
• Termination: in every admissible execution, all the non-faulty nodes must eventually assign some value to decision.
• Agreement: in every admissible execution, the non-faulty nodes must not decide on conflicting values.
• Validity: in every admissible execution, all non-faulty nodes must make the correct decision, i.e. choose the correct value for decision. (For instance, if the consensus problem consists of choosing the maximum value from a certain set of numbers, then decision must actually be the maximum of the given input values.)
Clearly the consensus problem is an important one, and the process is disturbed in the presence of nodes which behave in an undesirable manner.
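The level-by-level selection of Algorithm 13 can be simulated recursively on a toy tree. The tree, the set of requesting membranes, and the lowest-id tie-break rule below are assumptions made for illustration; the book's tournament primitive is left abstract.

```python
# Tournament mutual exclusion on a membrane tree: every parent selects one
# requester among its children's winners (and itself); the survivor at the
# skin gains entry to the critical section. -1 means "no requester".
parent = [None, 0, 0, 1, 1, 2, 2]
wants = {3, 5, 6}                     # membranes requesting the critical section

children = [[] for _ in parent]
for i, p in enumerate(parent):
    if p is not None:
        children[p].append(i)

def tournament(m):
    cand = [m] if m in wants else []
    cand += [w for w in (tournament(c) for c in children[m]) if w != -1]
    return min(cand) if cand else -1  # assumed tie-break: lowest id wins

winner = tournament(0)                # exactly one requester survives at the skin
```

Exactly one of the requesting membranes reaches the skin, which is what gives mutual exclusion without deadlock; with a fair tie-break (e.g. rotating priorities), lockout is avoided as well.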
However, within certain restrictions, it is possible to achieve fault-tolerant consensus.
4.7.1. Failures. A failure is said to occur when a processor behaves abnormally. There are two basic types of failures:
• Simple (crash) failures: nodes within the system simply stop functioning and never recover; wrong operations are never performed.
• Byzantine failures: in this notorious class of failures, the failed processor may behave in an unpredictable manner, working against the process of reaching a consensus.

The Simple Failure Case
The most important parameter to be determined is f, the maximum number of nodes that can fail while consensus can still be achieved. Such a system is called an f-resilient system.

Lemma 5.6. In every execution, at the end of f + 1 rounds all non-faulty nodes have the same set of values on which to base their decision.

Theorem 5.7. An upper bound of f + 1 rounds suffices to solve the consensus problem with simple failures in an f-resilient system.

Algorithm 14 Fault-Tolerant Consensus: Simple Failure Case
Initially, every processor pj has some value xj which it needs to send to all other processors, so as to ultimately reach a consensus based on these values. In every round k, 1 ≤ k ≤ f + 1, pi behaves as follows:
(1) Send xi to all nodes within the parent's membrane (unless the node is the skin membrane).
(2) Receive xj from pj.
(3) Add xj to an array/vector V.
(4) If k = f + 1, make a decision based on the values stored in V.

The Byzantine Failure Case
In the Byzantine case, the faulty nodes behave arbitrarily and sometimes even maliciously. It thus becomes difficult to distinguish between a functional and a Byzantine node: unlike a node that crashes and simply stops sending messages, a Byzantine node continues to send messages which may hamper the consensus process.

Lemma 5.8. In a system with 3 nodes, one of which is Byzantine, there is no algorithm that solves the consensus problem.

Theorem 5.9. The consensus problem can be solved in a system with Byzantine failures only if fewer than one third of the processors are faulty (n > 3f).
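For the simple (crash) failure case, the f + 1 rounds of value exchange in the style of Algorithm 14 can be simulated as follows. The crash schedule, the input values, and the minimum-based decision rule are invented for illustration.

```python
# f-resilient consensus under crash failures: for f+1 rounds every live
# processor relays the set of values it knows; afterwards all non-faulty
# processors decide on the minimum of their (by then identical) sets.
n, f = 4, 1
inputs = [3, 1, 4, 2]
crash_round = {0: 1}                  # processor 0 crashes during round 1

known = [{inputs[i]} for i in range(n)]
for rnd in range(1, f + 2):           # rounds 1 .. f+1
    sent = [set(k) for k in known]
    for i in range(n):
        if crash_round.get(i, f + 2) < rnd:
            continue                  # crashed in an earlier round: silent
        # a processor crashing this round reaches only a prefix of its peers
        limit = 2 if crash_round.get(i) == rnd else n
        for j in range(limit):
            known[j] |= sent[i]

alive = [i for i in range(n) if i not in crash_round]
decisions = {min(known[i]) for i in alive}
```

Even though the crashing processor delivers its value to only some peers, the extra round propagates it everywhere, so all non-faulty processors hold the same set and decide identically, in line with Lemma 5.6.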
Theorem 5.10. To solve consensus in an f-resilient system, every non-faulty node must send at least f + 1 messages to all other non-faulty nodes in order to meet the requirements of the consensus problem.

4.8. Example and Implementation. In this section we present an example describing an immune response system against virus attacks. The example is implemented using a P system library which emulates the main functions of a P system, together with an MPI program that takes advantage of the highly parallel features provided by the model.
• When a virus enters a membrane (one consisting of many cellular membranes, e.g. a human body), it tries to destroy the host membrane and all the cells contained within it by replication and propagation.
• The immune response system of the membrane is in charge of producing suitable antibodies which can counter-attack the virus.
• If the antibody is not present, it needs to be transported (through inter-cellular communication) from the point where it is produced to the place where it is required.

4.8.1. Terms.
• Immune Response: the way the body recognizes and defends itself against microorganisms, viruses, and substances recognized as foreign and potentially harmful.
• Antibody: antibodies are special proteins that are part of the body's immune system; white blood cells produce antibodies to neutralize harmful germs and other foreign substances, called antigens.
• Virus: an infectious particle composed of a protein capsule and a nucleic acid core, which depends on a host organism for replication.
• Virus Propagation: the process by which the virus multiplies and sends copies of itself to the inner cells.
• Virus Neutralization: the process by which the virus is deactivated (destroyed) by the corresponding antibody.
• Clean cell: a cell which is free from any viral infection.
• Infected cell: a cell which has at least one virus present in it.
• Virus Maturity Period: the time interval during which the virus is inactive; afterwards it replicates regularly.
• Virus Propagation Period: a mature virus replicates regularly every P time units, where P is the virus propagation period.

Problem Definition.
• Given an initial layout, it is useful to know whether, when an equilibrium is reached, all the cells are clean or some cells remain infected.
• From a pharmaceutical point of view, this tells us whether the membrane (organism) requires any external medicinal supply, or whether it is strong enough to resist the virus on its own.
• Certain configurations become worse at every subsequent phase without external medication (e.g. as in cancer); detecting such patterns in their early stages would increase the chances of cleaning the cells.

Membrane Perspective.
• The organism which the virus infects is represented as the skin membrane of the P system.
• The cellular hierarchy of the organism is modelled by the tree structure of the membranes; e.g. an organ system (depicted as a membrane) has several organs (depicted as its sub-membranes).
• The viruses and antibodies are the objects of the system.
• The virus and antibody properties (e.g. type and lifetime) are the symbols of the objects.

Rules.
(1) The skin membrane has an unlimited supply of antibodies of all types.
(2) An antibody of type ai is required to neutralize the virus vi.
(3) Virus propagation:
• each virus has a maturity period, after which it can reproduce;
• thereafter, it reproduces regularly after a fixed propagation period;
• each virus child thus created may be sent to any one of the child membranes, chosen at random.
(4) A membrane requiring an antibody sends a request for that particular antibody to its parent.

4.8.2. Distributed Computing Approach. For this problem we can use the following algorithms:
• Leader Election - identifying whether a cell is clean or infected is analogous to deciding whether the virus wins or not.
• Synchronization - communication between membranes to maintain the required balance of antibodies in order to reach a clean state.

4.8.3. Equilibrium State Determination.
• Determining whether the virus will survive or not is not a trivial problem.
The sufficient condition for the equilibrium state to be clean can be written as M > 2D, where M is the maturity period of the virus and D is the maximum distance between a virus and the skin. This is easy to see, as it takes exactly 2 × D time for a membrane to receive the antibody after requesting it; if the condition is satisfied, the virus is destroyed before it reproduces. The condition has to hold for all the membranes.
• However, the necessary condition is a lot more complicated. Since some antibodies may already be present at earlier stages (e.g. in one of the membranes other than the skin membrane), a membrane may receive the antibody earlier; thus, in practice a precise analysis is required.
• Each child virus has its own life cycle of maturing and reproducing simultaneously, which makes the mathematical formulation rather complicated.
The following table shows the number of viruses at different times, in an example with M = 3 and P = 1.

Time  Total  New  Mature
 0      1     1     0
 1      1     0     0
 2      1     0     0
 3      2     1     1
 4      3     1     1
 5      4     1     1
 6      6     2     2
 7      9     3     3
 8     13     4     4
 9     19     6     6
10     28     9     9
• From the above table we can observe that n(t) = n(t − 1) + n(t − 3); generalizing, we get n(t) = n(t − P) + n(t − M).
• It is possible to express n(t) in closed form; the coefficients of this expression are determined by the roots of the characteristic equation t^M − t^(M−P) − 1 = 0.
• It is very difficult to solve this equation manually; however, a recursive algorithm can easily be implemented at the start of the simulation to decide the state.

Algorithm. In each round:
(1) Exchange of antibodies:
(a) every parent sends its children the antibodies requested in the previous round;
(b) every child receives the antibodies sent by its parent.
(2) Virus propagation:
(a) increment the virus life and check for reproduction;
(b) send viruses to children if required (randomly);
(c) receive viruses from the parent (if not the skin).
(3) Compute leader - in each round, check whether the virus dominates or is destroyed.
(4) Synchronization - compute and send antibody requests to the parents.
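The recurrence n(t) = n(t − P) + n(t − M) can be verified against the table above in a few lines; the base case n(t) = 1 for t < M encodes the single initial, still-immature virus.

```python
# Virus population growth: n(t) = n(t - P) + n(t - M), with one inactive
# virus before the first maturity period elapses (illustrative sketch).
M, P = 3, 1

def n(t, memo={}):
    if t < M:
        return 1
    if t not in memo:
        memo[t] = n(t - P) + n(t - M)
    return memo[t]

table_totals = [n(t) for t in range(11)]
```

For M = 3 and P = 1 this reproduces the Total column of the table: 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28.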
Implementation.
(1) A P system library was written to emulate the functions of a P system; here we assume that each membrane is a processor.
(2) An MPI program was written to simulate the various rounds of the system, i.e. to implement the algorithm presented above.
(3) An interface was written to create an additional level of abstraction between the library and the MPI program, hiding the implementation details of the P system library.
(4) Automated graphical output is generated at the end of the simulation using XFig and LaTeX: the output at the end of each round is encoded as an XFig diagram, and these diagrams are included in a LaTeX presentation.
Figure 10. Example of an Output Diagram

Our implementation is written in C and uses the parallel environment provided by the MPI library. MPI stands for Message Passing Interface; it is a standard developed to enable writing portable message-passing applications, providing functions for exchanging messages and many other activities. MPI is implemented on both shared-memory and distributed-memory parallel computers. Although many new parallel languages and environments have been introduced, MPI is still very popular in the field of parallel computing. MPI is a library of functions and macros that can be used in Fortran, C, C++ and Java programs. MPI programming means that you write your program in C, C++ or Java, and when parallel processes need to communicate or synchronize, you should
explicitly call an MPI send or receive function. An MPI send function sends a message to the named target process, and in the target process a corresponding receive must be posted to complete the exchange. MPI is quite easy to use: around six commands suffice for simple programs, namely MPI_Init(), MPI_Comm_rank(), MPI_Comm_size(), MPI_Send(), MPI_Recv() and MPI_Finalize(). To use the MPI system and functions, the header file mpi.h must be included. Different processes are identified by their task IDs: the MPI system assigns each process a unique integer called its rank (beginning with 0). The rank is used to identify a process and to communicate with it. Each process is a member of a communicator, which can be thought of as a group of processes that may exchange messages with each other. By default, every process is a member of a generic communicator environment (which could be interpreted as the skin in a membrane system); new communicators can be created, but this leads to an unnecessary increase in complexity. The processes are essentially identical: there is no inherent master-slave relationship between them, so it is up to the programmer to decide who is the master and who are the slaves. A master process can distribute data among the slaves; once the data is distributed, the master waits for the slaves to send back their results and collects their messages. The packing and decoding of messages is handled internally by MPI. The code for the master as well as for the slaves can reside in the same executable file. More details can be found in [85].

5. Membrane Computing Software Simulators

Since there exist no implementations of membrane systems in wet laboratories (neither in vitro nor in vivo), it is natural to look for software simulators of P systems. Such software implementations of membrane systems are useful when trying to understand how a system works.
Based on such an implementation, it is possible to formally verify certain behavioural properties of membrane systems. Finally, these simulators can be used in the biological applications of membrane computing.

5.1. Simulators of Active Membranes. We first present a software implementation of P systems with active membranes. Polynomial-time solutions to NP-complete problems via P systems with active membranes can be obtained by trading space for time: membrane division is used to produce an exponential number of membranes which can then work in parallel. The simulation of P systems with active membranes has to deal with the potential growth of the membrane structure, adapting the topology of the configurations dynamically as membranes are added or deleted. Due to the limitations of computational resources, the implemented P systems with active membranes have small size.
In [CP] we have presented a software implementation which provides a graphical simulation for two variants of P systems: the initial version of catalytic hierarchical cell systems, and P systems with active membranes. Its main functions are:
• interactive definition of a membrane system,
• visualization of a defined membrane system,
• a graphical representation of the computation and final result, and
• saving and (re)loading of a defined membrane system.
Figure 11. Screenshot of the Simulator.

The system is presented to the user through a graphical interface whose main screen is divided into two windows: the left window gives a tree representation of the membrane system, including objects and membranes, while the right window provides a graphical representation of the membrane system by Venn-like diagrams. A menu allows the specification of a membrane system by adding new objects, membranes, rules, and priorities. By means of the functions Start, Next, and Stop, the user can observe the system evolution step by step. The application was implemented in Microsoft Visual C++ using MFC classes. For a scalable graphical representation, Microsoft DirectX technology was used; one of its main features is that the size of each component of the graphical representation is adjusted according to the number of membranes of the system.
5.2. Parallel Simulators. One of the main difficulties in simulating P systems on current computers is that the computational power of P systems lies in their intrinsic massive parallelism. We have implemented a simulator based on a parallel architecture which is close to the membrane computing paradigm. In [CG] we present a parallel implementation of transition P systems. The implementation was designed for a cluster of computers; it is written in C++ and uses the Message Passing Interface (MPI) as its communication mechanism. MPI is a standard library developed for writing portable message passing applications, and it is implemented both on shared-memory and on distributed-memory parallel computers. The program was implemented and tested on a Linux cluster at the National University of Singapore, consisting of 64 dual-processor nodes. The implementation is object-oriented and involves three components:
• class Membrane, which describes the attributes and behaviour of a membrane,
• class Rule, which stores information about a particular rule, and
• a Main method, which acts as central controller.
The rules are implemented as threads. At the initialization phase, one thread is created for each rule. Rule applications are performed in rounds. To synchronize each rule thread within the system, two barriers implemented as mutexes are associated with the thread. A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and to non-signaled when it is owned. At the beginning of each round, the barrier that the rule thread is waiting for is released by the primary controlling thread. After the rule application is done, the thread waits for the second barrier, and the primary thread locks the first barrier. Since each rule is modelled as a separate thread, it should have the ability to decide its own applicability in a particular round.
Generally speaking, a rule can run when no other rule with higher priority is running and the resources it requires are available. When more than one rule can be applied under the same conditions, the simulator randomly picks one among the candidates. With respect to synchronization and communication, each membrane communicates mainly by sending and receiving messages to and from its parent and children at the end of every round. With respect to termination, the system is no longer active when no rule is applicable in any membrane. When this happens, the designated output membrane prints out the result and the whole system halts. In order to detect whether the P system halts, each membrane must inform the other membranes about its inactivity; it can do so by sending messages to the other membranes and by using a termination detection algorithm [6].

5.3. WebPS: A Web-based P Systems Simulator. We present in more detail WebPS, an open-source web-enabled simulator for P systems.
This simulator is based on CLIPS and is available as a web application. The P accelerator is used to parallelize existing sequential simulators of P systems; the speedup and the efficiency of the resulting parallel implementation are surprisingly close to the ideal ones. Combining these two ingredients, we get a complex and ready-to-use parallel simulator for P systems. The main feature of WebPS is its availability as a Web application: it requires no installation and can be used from any machine anywhere in the world, without any previous preparation. A simple and easy-to-use interface allows the user to supply the XML input either as text or as a file. A friendlier way of describing P systems is given by an interactive JavaScript-based P system designer. The interface provides a high degree of (re)usability during the development and simulation of P systems. The initial screen offers an example, and the user may find useful documentation about the XML schema, the rules, and the query language. The query language helps the user to select the output of the simulation. While some existing CLIPS implementations represent P system rules by CLIPS facts, we get a significantly faster execution by using CLIPS rules to implement P system rules. P systems are input data for WebPS, and they are described by XML documents; this provides a standard method to access information, making it easier to use, modify, transmit and display. XML is readable and understandable, and it expresses metadata in a portable format, since many applications can process XML on many existing platforms. Moreover, using XML makes it easy to define new features and properties of P systems. XML allows automated document validation, rejecting wrong input data and warning the user before execution about possible errors in the P system description.
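To fix ideas, an XML description of a P system might look roughly as follows. This is a hypothetical sketch, not the actual WebPS schema: only the membrane element is attested in the text, and the other element names are invented for illustration.

```xml
<psystem>
  <membrane name="0">
    <membrane name="1">
      <objects>a a a b</objects>
      <!-- hypothetical encoding of a transition rule with a priority -->
      <rule priority="11">a+b -&gt; c+d(2)+e(0)</rule>
    </membrane>
  </membrane>
  <queries>
    <query>(count-of (objects-from 0))</query>
  </queries>
</psystem>
```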
From a user's point of view, all these features facilitate an efficient description and reconfiguration of P systems. The simulator is open-source, actually free software, offered at http://psystems.ieat.ro under the GNU General Public License. This allows anyone to contribute enhancements and error corrections to the code, and possibly to develop new interfaces for the C and CLIPS level APIs. These interfaces can be local (graphical or command-line), or other Web-based ones.

5.3.1. WebPS Software Architecture. The software structure of the simulator has three distinct levels. The inner level is the CLIPS level; it integrates part of the knowledge about the theoretical description of P systems, the result being a library of CLIPS functions, meta-rules and templates. Since CLIPS is easily embeddable in C (as its name "C Language Integrated Production System" suggests), we can control the CLIPS level from a C program, and we include some examples to illustrate how this is done. The C level is strengthened by introducing a C library for modelling and simulating P systems based on the CLIPS library. The Web application
level offers a user-friendly interface to the simulator, and can become, after the addition of debugging and visualization features, a powerful P system development tool.

(Architecture diagram: the HTML input form for XML and the JavaScript P system Designer feed a PHP script, which invokes a CGI program written in C; the CGI program uses the PS-XML C library, LibCGI, and the CLIPS library.)
CLIPS Level. This is the core level of the simulator. We address here a crucial issue, namely how to implement in a sequential context the maximally parallel and nondeterministic execution of P systems. The maximally parallel execution requirement is met by constraining our simulation cycle to the following distinct steps:
(1) React step: the activated reaction rules are sequentially executed.
(2) Spawn step: the new objects created by rules inside their membranes are asserted as object facts; they become visible only for a future React step.
(3) Communication step: the objects injected or ejected by communication rules into various membranes are asserted as objects.
(4) Divide step: it handles possible division processes of membranes.
(5) Dissolve step: it handles dissolving processes of membranes.
By constantly recording the state of the P system after each React, Spawn and Communication step, we can get a trace of an execution. CLIPS uses the RETE algorithm to process rules; RETE is a very efficient mechanism for solving the many-to-many matching problem [40]. The nondeterministic execution requirement is fulfilled by the CLIPS random mechanism. We have looked closely at the random strategy, and found that it makes the same choice for the same configurations in different executions: the random function of CLIPS relies on the random number generator, and the failure is related to the improper seeding of this generator. We have corrected this error, properly seeding the random number generator from /dev/urandom, the entropy gathering device on GNU/Linux systems. This fix can also be useful for other CLIPS implementations possibly affected by the same failure. An important implementation choice concerns data representation. We decided to represent P system objects and membrane structure
as CLIPS facts (with the set-fact-duplication option of CLIPS set to on), and the P system reaction rules as CLIPS rules. This contrasts with previous implementations, which represented reaction rules as CLIPS facts; while their choice might allow general meta-rules for execution, we get a flexible and efficient framework by representing reaction rules as CLIPS rules. The efficiency is gained by making direct use of the CLIPS pattern-matching mechanism and rule activation capabilities. This choice is also confirmed by the efficiency of the dissolve and divide operations, which imply a lot of moving and copying. It is worth noting that the simulator supports division, promoters/inhibitors, and symport/antiport rules for membranes. For example, the P system transition rule a+b→c+d(2)+e(0) with priority 11 from membrane 1 is converted into a CLIPS rule beginning as follows (the listing is abridged here):

(defrule MAIN::1_a+b->c+d[2]+e[0]
  (declare (salience 11))
  (do (what react))
  (or (parent-child (parent 1) (child 2))
      (parent-child (parent 2) (child 1)))
  (or (parent-child (parent 1) (child 0))
      (parent-child (parent 0) (child 1)))
  ?a-0 ...)

Web Level. The Web level of the simulator allows choosing between a user-friendly P system designer written in JavaScript and a traditional HTML input form transmitting an XML description, either by uploading a file or by editing it. Aside from editing the XML P system description, the user can specify a number of executions of the P system. Our JavaScript P system Designer aims to facilitate the description of the P system without requiring the user to write XML, generating it from the user's interaction with a dynamic interface. After the user introduces an XML description, it is transmitted to a PHP script which does some further processing, and then sent to a CGI program written in C. The C program uses a specific P system XML library called PS-XML, as well as LibCGI and the CLIPS library, in order to simulate the evolution of the P system.
Finally, it returns the results to the user. It is possible to select various pieces of information to be provided as results; in order to help the user select the desired information, we define a query language called PsQL.

5.3.2. P Systems Query Language. We define PsQL as an SQL-like language for querying the state of a P system, and we have developed a CLIPS library for parsing and interpreting this language. At the Web level, the queries can be included in the XML input; these queries are activated after the execution of the specified P system. If no query exists, the P system is simulated, but no output is generated. At the CLIPS level, it is possible to specify queries for the P system in a dynamic manner, not just before starting the simulation. At the syntactic level, PsQL is a Lisp-like language, supported by a small CLIPS library of list-handling functions. The Backus-Naur description of PsQL is presented here:

<query> ::= <expression> | <count-query> | <objects-query> | <membranes-query>
<count-query> ::= "(" count-of <objects-query> | <membranes-query> ")"
<objects-query> ::= "(" objects-from <membranes-spec> [ where <where-spec> ] ")"
<membranes-query> ::= "(" membranes-from <membranes-spec> [ where <where-spec> ] ")"
The full description is available at http://twiki.ieat.ro/twiki/bin/view/Institut/PSystemsQueryLanguage. We plan to extend PsQL with trace facilities; having queries on the possible traces during an execution represents a step towards an automated verification of P systems.

5.3.3. Examples. The first example is a P system that computes the multiplication of two natural numbers, described graphically in the following figure.

(Figure: a multiplication P system with membrane 1 nested inside the skin membrane 0; membrane 0 initially contains the b objects together with the rules producing d objects, while membrane 1 contains the a objects and exchanges the auxiliary objects u and v with membrane 0.)
As inputs we consider the number of a objects in membrane 1 and the number of b objects in membrane 0. The result is given by the number of d objects in membrane 0. This P system differs from other similar ones in that it does not have exponential space complexity and does not require active membranes. As a particular case, it is quite easy to compute n^2 by simply placing the same number n of a and b objects in its membranes. Another interesting feature is that it may continue the multiplication after reaching a certain result. Thus, if initially there are m b objects and n a objects, the system evolves and reaches a state with n · m d objects in membrane 0. If the user wishes to continue in order to compute (n + k) · m, it is enough to inject k a objects into membrane 1 in the current state, and the computation can go on. Therefore this example emphasizes a certain degree of re-usability.

Recursive Sum. The P system described in the next picture computes the recursive sum ∑_{i=1}^{n} k_i.
(Figure: the recursive sum P system, with membranes 1, . . . , n nested inside the skin membrane 0; membrane i contains k_i copies of object a, and each membrane sends its a objects to membrane 0 via a rule a → a(0).)
The numbers of a objects in the membranes 1, . . . , n are the addition arguments, and the result of the computation is the number of a objects in membrane 0. The PsQL query to determine this result is: (count-of (objects-from 0)). While this example is rather trivial, it illustrates the expressiveness of the query language (PsQL). Using PsQL queries, it is not even necessary to apply the rules and execute the specified P system: given the initial multisets, the same recursive sum is obtained by using the following query: (count-of (objects-from (membranes-from 0))).

Dot Product of Vectors. Combining the previous two examples, we can compute the dot (scalar) product x · y of two vectors x, y ∈ N^m, where m ∈ N. Let us denote the components of the vectors by x_i and y_i, respectively; these components are given by the numbers of b and a objects in the membranes labelled 2i and 2i + 1, respectively. Then x · y is given by the number of d objects obtained in membrane 0 after the P system halts. This P system is described graphically as:
(Figure: the dot product P system; for each k = 0, . . . , m − 1, membranes 2k + 1 and 2k + 2 form a multiplication subsystem as in the first example, with b^{x_{k+1}} objects in membrane 2k + 1 and a^{y_{k+1}} objects in membrane 2k + 2; the resulting d objects are sent to the skin membrane 0.)
where k = 0, . . . , m − 1. The PsQL query for retrieving the result is: (count-of (objects-from 0)).

5.4. On Parallelization of Sequential Simulators. There will always be a need for exploiting parallelism in computing, such that many difficult problems can be executed on parallel architectures. There is also a need to free programmers from thinking about the parallelization of existing sequential programs. Automatic parallelization of code has been an active research topic in scientific computing for some decades. For a long period of time, such attempts achieved little of the potential benefit of parallel computing; recent contributions improve the performance, and efficient parallel codes have been created automatically to solve problems in various fields. We refer here to the sequential simulators of P systems, particularly to those implemented in CLIPS. These sequential simulators have both didactic and scientific value. However, P systems are inherently parallel and, in many variants, they also exhibit an intrinsic nondeterminism which is hard to capture on sequential computers. By simulating parallelism and nondeterminism on a sequential machine, one can lose the real power of parallelism
and the attractiveness of P system computing. Therefore simulations on multiple processors represent a particularly interesting subject. Currently the only known parallel and cluster implementation of P systems is presented in [CG], using C++ and MPI. Jess (the Java Expert System Shell) is a rule-based programming environment for Java platforms, available at http://herzberg.ca.sandia.gov/jess/; more information about Jess can be found in [43]. Jess uses the RETE algorithm [40] for rules, an efficient mechanism for solving the difficult many-to-many matching problem. Our strategy is to build a parallel production system with a high degree of flexibility, namely to construct a wrapper for the parallel system which allows cooperation between its instances. From this point of view, Jess has been considered mainly due to its flexibility (communication via sockets, ability to create Java objects and call Java methods) and its compatibility with the C Language Integrated Production System (CLIPS); CLIPS is available at http://www.ghg.net/clips/CLIPS.html, where it is presented as a tool for building expert systems. Jess is also selected because of its active development and support, tight interaction with Java programs, and expressiveness. The Jess rule language includes elements not present in many other production systems, such as arbitrary combinations of Boolean conjunctions and disjunctions. Its scripting language is powerful enough to generate full applications entirely within the Jess system. Jess is also a powerful Java scripting environment, from which one can create Java objects and call Java methods without compiling any Java code. The core Jess language is fully compatible with CLIPS (Jess scripts are valid CLIPS scripts, and vice-versa). Jess adds many features to CLIPS, including backward chaining, working memory queries, and the ability to manipulate and directly reason about Java objects.
We consider existing implementations and improve them by using multiple computing units, leading to faster P system simulators than those currently available. The proposed architecture for such a system is based on an accelerator, actually a wrapper allowing the cooperation between several instances of a program running on different computers, in order to speed up the current P system simulators. We apply our P accelerator to WebPS. The set of rules and the configurations in each step of the evolution are expressed as facts in a knowledge base. We first show that a technique of splitting the membranes over several Java threads running embedded Jess can lead to a faster simulation; then we use the P accelerator to further speed up the simulation.

5.4.1. P Accelerators. Having in mind the parallel evolution of the membranes based on different rules, we build a parallel distributed-memory version of a production system based on task parallelism. The target architecture is a homogeneous cluster of workstations. Building a Jess application based on socket communication facilities is considered to be difficult and can
distract the user from the main aim; therefore a middleware is needed. We adopt a modular scheme composed of Jess instances, Connectors, and Messengers. A first component of the P accelerator is called a Connector, and consists of Java code. Each Jess instance has one corresponding Connector. A Jess instance (acting as a client) contacts its Connector (acting as a server) via a socket. As soon as such a connection is established, the Connector interprets the special Jess incoming requests for communication, and controls other Jess instances (namely it sends or receives information, launches or kills other instances). A piece of information in transit is a string containing a command written in the Jess language. Another component of the P accelerator is represented by the Messengers. Each Messenger is associated with one Connector; its purpose is to execute the commands received by the Connector and to communicate with the Messengers associated with the other Jess instances. We add new commands to Jess, providing a valid message passing interface.

5.4.2. Implementation Details. A Connector uses the standard Java ServerSocket methods. The current Messenger is written in Java and JPVM, a Java implementation of the Parallel Virtual Machine, selected due to its ability to dynamically create and destroy tasks, which is useful when simulating the division and dissolution of membranes. JPVM is available at http://www.cs.virginia.edu/~ajf2j/jpvm.html. By adopting a PVM variant, the user is absolved of the duties of nominating the hosts where the Jess instances run, of treating the incoming messages sequentially (as in the case of a socket connection), and of checking the status of the machines on which the Jess instances are running. Regarding CLIPS, we should mention that a production system consists of a working memory, a set of rules, and an inference engine. The working memory is a global database of data representing the system state. A rule is a condition-action pair.
The inference engine is based on a three-phase cyclic execution model: condition evaluation (matching), conflict resolution, and action firing. Firing can add, delete or modify elements in the working memory. An instantiation is a rule together with a set of working memory elements. In a sequential environment, conflict resolution selects one instantiation for firing from the set of all instantiations; in a parallel environment, multiple instantiations can be selected for firing simultaneously. With a high degree of parallelism, the time performance can be improved. The computational power of a P system should be used not for solving small problems for which we already have faster algorithms, but for large and difficult problems. The main goal of our experiments is to measure the efficiency of the P accelerator in a cluster environment when it is applied to a particular problem. The experiments use the P system with active membranes to solve the validity problem: given a Boolean formula in conjunctive normal form, determine whether or not it is a tautology. If we consider the problem
input in the form ∧_{i=1}^{m} ∨_{j=1}^{k_i} x_{ij}, where x_{ij} ∈ {X_1, . . . , X_n, ¬X_1, . . . , ¬X_n}, the P system solves the NP-complete problem of validity in 5n + 2m + 4 steps. The number of membranes increases by division from only three initial membranes to 2^n + 2 membranes at the end of the computation. This example is also of interest for parallel simulation using dynamic task creation. Different membranes can be distributed on different machines of a cluster, and they can evolve independently according to the evolution rules; send-in and send-out rules request message exchanges. We concentrate our attention on the most consuming part of the simulation, namely after the final division, when there are already 2^n + 2 membranes. The number of rules to be fired is similar to that of the classical production system benchmarks. The following table specifies the number of rules to be fired when checking the validity of a Boolean expression with m = n.
m × n | Membranes after the last division | Rules fired | Time for firing those rules | Reaching the final configuration | Total time of simulation
2 × 2 |  6 |  335 |   1 s |  1 s |   2 s
3 × 3 | 10 |  779 |  13 s |  3 s |  16 s
4 × 4 | 18 | 1835 | 428 s |  6 s | 434 s
5 × 5 | 34 |    ? |     ? | 49 s |     ?

The time is measured on one PC of the cluster used in our tests. The cluster is composed of four PCs at 1.5 GHz with 256 MB RAM, connected via a Myrinet switch ensuring communication at 2 Gb/s. We describe now how the P accelerator works in order to ensure the distribution of the membranes and the communication between rules. The P accelerator can be activated by a user in a simple way. Each Jess instance reads the simulator rules, the facts (the membrane rules to be applied), the membrane structure, and the membrane contents. Each membrane is owned by a Jess instance; an instance can own none, one, or several membranes. Rules are fired by an instance only if they are associated with an owned membrane. Each instance has copies of the membranes of the other instances. A change in the content of the father membrane can lead to a change in the children membranes. The sources and destinations used in the send and receive commands are related to their membrane owners; the send and receive rules are activated only if the involved sources and destinations have different owners. It is easy to note a significant reduction of the running time, even when the P accelerator runs on a single machine of the cluster (i.e., all Jess instances run on the same machine). In the following table, one can identify the shortest time for each number of membranes; the lowest time depends on the dimension of the problem, and it seems that for an m × m problem the lowest time is given by m Jess instances.
Membranes | 1 instance | 2 instances | 3 instances | 4 instances
 6 |      1 s |    1 s |   3 s |   3 s
10 |     13 s |    5 s |   5 s |  10 s
18 |    428 s |   45 s |  25 s |  33 s
34 | Mem. out | 1054 s | 325 s | 200 s

Further reduction of the simulation time is expected when the P accelerator runs on several machines, and these expectations are confirmed by our tests. The speedup S_p and the efficiency E_p of the parallel implementation are registered in the following table, the values being close to the ideal ones. It is easy to remark a normal increase of the speedup with the number of rules to be fired. Even better results are expected for larger instances of the validity problem.

Instances | Machines |   6 membranes |   10 |    18 |     34
1 | 1  |   1 s |  13 s | 428 s | Memory out
2 | 1  |   1 s |   5 s |  45 s | 1054 s
2 | 2  |   1 s |   3 s |  23 s |  528 s
  | S2 |     1 |   1.7 |   1.9 |      2
  | E2 |  0.51 |  0.65 |  0.95 |   0.99
3 | 1  |   3 s |   5 s |  25 s |  325 s
3 | 3  |   3 s |   2 s |  10 s |  126 s
  | S3 |     1 |   2.5 |   2.5 |    2.6
  | E3 |  0.33 |  0.83 |  0.83 |   0.87
4 | 1  |   3 s |  10 s |  33 s |  200 s
4 | 4  |   2 s |   3 s |   9 s |   52 s
  | S4 |   1.5 |   3.3 |   3.7 |    3.8
  | E4 |  0.37 |  0.82 |  0.91 |   0.96

To summarize, WebPS is efficient, flexible, and does not require any previous knowledge or expertise in computers. Since the simulator has some novel and interesting features related to efficiency, ease of use and generality, it can become a useful tool for the community, both theoretically and practically. Being GPL licensed, it is eligible to become a simulator benchmark reference. The simulator is available at http://psystems.ieat.ro. We intend to make the simulator available as a web service. Moreover, continuing the commitment to standards compliance, we will strive for SBML compatibility of our specification language. Further improvements are related to better debugging and visualization capabilities (including a flexible, fine- and coarse-grained tracer), developing a library of macros and methodologies using the principles of modularity, extensibility and structured design from software engineering, and introducing rules for the development of macros for P systems.
Bibliography

[BC] D. Besozzi, G. Ciobanu. A P System Description of the Sodium-Potassium Pump. Lecture Notes in Computer Science vol.3365, 211–223, 2005.
[BCIP] C. Bonchiş, G. Ciobanu, C. Izbaşa, D. Petcu. A Web-Based P Systems Simulator and its Parallelization. Lecture Notes in Computer Science vol.3699, 58–69, 2005.
[C1] G. Ciobanu. Formal Description of the Molecular Processes. Recent Topics in Mathematical and Computational Linguistics, Editura Academiei, 82–96, 2000.
[C2] G. Ciobanu. Molecular Structures. Words, Sequences, Languages: Where Computer Science, Biology and Linguistics Come Across, Kluwer, 299–317, 2001.
[C3] G. Ciobanu. Distributed Algorithms over Communicating Membrane Systems. BioSystems vol.70, 123–133, 2003.
[C4] G. Ciobanu. Software Verification of the Biomolecular Systems. Modelling in Molecular Biology, Natural Computing Series, Springer, 40–59, 2004.
[C5] G. Ciobanu. Pumps Systems of Membranes. Proceedings 2nd Brainstorming Week on Membrane Computing, University of Seville, 130–134, 2004.
[CCT] G. Ciobanu, V. Ciubotariu, B. Tanasă. A pi-calculus Model of the Na Pump. Genome Informatics, Universal Academy Press, 469–472, 2002.
[CDK] G. Ciobanu, R. Desai, A. Kumar. Membrane Systems and Distributed Computing. Lecture Notes in Computer Science vol.2597, 187–202, 2003.
[CDHMT] G. Ciobanu, D. Dumitriu, D. Huzum, G. Moruz, B. Tanasă. Client-Server P Systems in Modeling Molecular Interaction. Lecture Notes in Computer Science vol.2597, 203–218, 2003.
[CG] G. Ciobanu, W. Guo. P Systems Running on a Cluster of Computers. Lecture Notes in Computer Science vol.2933, 123–139, 2004.
[CH] G. Ciobanu, D. Huzum. Discrete Event Systems and Client-Server Model for Signaling Mechanisms. Lecture Notes in Computer Science vol.2602, 175–177, 2003.
[CP] G. Ciobanu, D. Paraschiv. P System Software Simulator. Fundamenta Informaticae vol.49, 61–66, 2002.
[CTDHM] G. Ciobanu, B. Tanasă, D. Dumitriu, D. Huzum, G. Moruz. Simulation and Prediction of T Cell Responses. Proceedings 3rd Conference on Systems Biology ICSB'02, 88–89, 2002.
Bibliography [1] E. Alba, M. Tomassini. Parallelism and Evolutionary Algorithms. IEEE Transactions on Evolutionary Computation vol.6, 443–462, 2002. [2] B. Alberts, A. Johnson, J. Lewis, M. Raff, K. Roberts, P. Walter. Molecular Biology of the Cell, 4th ed. Garland Science, New York, 2002. [3] A. Alhazov. P Systems Without Multiplicities of Symbol-Objects. Information Processing Letters vol.100, 124–129, 2006. [4] A. Alhazov, R. Freund, Y. Rogozhin. Computational Power of Symport/Antiport: History, Advances and Open Problems. Lecture Notes in Computer Science vol.3850, 1–30, 2006. [5] K.M. Anstreicher. Linear Programming in O([n3 / ln n]L) Operations. SIAM Journal on Optimization vol.9, 803–812, 1999. [6] H. Attiya, J. Welch. Distributed Computing: Fundamentals, Simulations and Advanced Topics, McGraw-Hill, 2000. [7] R. Barbuti, A. Maggiolo-Schettini, P. Milazzo, L. Tesei. Timed P Automata. Electronic Notes in Theoretical Computer Science, vol.227, 21–36, 2009. [8] R. Barbuti, A. Maggiolo-Schettini, P. Milazzo. Compositional Semantics and Behavioral Equivalences for P Systems. Theoretical Computer Science vol.395, 77–100, 2008. [9] F. Bernardini, V. Manca. P Systems with Boundary Rules. Lecture Notes in Computer Science vol.2597, 107–118, 2003. [10] I. Boneva, J.-M. Talbot. When Ambients Cannot be Opened. Lecture Notes in Computer Science vol.2620, 169–184, 2003. [11] P. Bottoni, C. Mart´ın-Vide, Gh. P˘ aun, G. Rozenberg. Membrane Systems with Promoters/Inhibitors. Acta Informatica vol.38, 695–720, 2002. [12] R. Brijder, M. Cavaliere, A. Riscos-N´ un ˜ez, G. Rozenberg, D. Sburlan. Membrane Systems with Proteins Embedded in Membranes. Theoretical Computer Science vol.404, 26–39, 2008. [13] N. Busi. On the Computational Power of the Mate/Bud/Drip Brane Calculus: Interleaving vs. Maximal Parallelism. Lecture Notes in Computer Science vol.3850, 144– 158, 2006. [14] N. Busi. Causality in Membrane Systems. Lecture Notes in Computer Science vol.4860, 160–171, 2007. 
[15] N. Busi, R. Gorrieri. On the Computational Power of Brane Calculi. 3rd Workshop on Computational Methods in Systems Biology, 106–117, 2005.
[16] C.S. Calude, Gh. Păun. Bio-Steps Beyond Turing. Biosystems vol.77, 175–194, 2004.
[17] L. Cardelli. Brane Calculi. Interactions of Biological Membranes. Lecture Notes in BioInformatics vol.3082, 257–278, Springer, 2004.
[18] L. Cardelli, A. Gordon. Mobile Ambients. Lecture Notes in Computer Science vol.1378, 140–155, 1998.
[19] L. Cardelli, Gh. Păun. A Universality Result for a (Mem)Brane Calculus Based on Mate/Drip Operations. ESF Exploratory Workshop on Cellular Computing (Complexity Aspects), Sevilla, 75–94, 2005.
[20] M. Cavaliere, V. Deufemia. Further Results on Time-Free P Systems. International Journal of Foundations of Computer Science vol.17, 69–89, 2006.
[21] M. Cavaliere, R. Freund, A. Leitsch, Gh. Păun. Event-Related Outputs of Computations in P Systems. Journal of Automata, Languages and Combinatorics vol.11, 263–278, 2006.
[22] M. Cavaliere, P. Leupold. Evolution and Observation - A New Way to Look at Membrane Systems. Lecture Notes in Computer Science vol.2933, 70–87, 2004.
[23] M. Cavaliere, I. Mura. Experiments on the Reliability of Stochastic Spiking Neural P Systems. Natural Computing vol.7(4), 453–470, 2008.
[24] M. Cavaliere, D. Sburlan. Time-Independent P Systems. Lecture Notes in Computer Science vol.3365, 239–258, 2005.
[25] M. Cavaliere, D. Sburlan. Time and Synchronization in Membrane Systems. Fundamenta Informaticae vol.64, 65–77, 2005.
[26] M. Cavaliere, S. Sedwards. Membrane Systems with Peripheral Proteins: Transport and Evolution. Electronic Notes in Theoretical Computer Science vol.171, 37–53, 2007.
[27] M. Cavaliere, C. Zandron. Time-Driven Computations in P Systems. Proceedings of the Fourth Brainstorming Week on Membrane Computing, 133–143, 2006.
[28] M. Clavel, F. Durán, S. Eker, P. Lincoln, N. Martí-Oliet, J. Meseguer, J.F. Quesada. Maude: Specification and Programming in Rewriting Logic. Theoretical Computer Science vol.285, 187–243, 2002.
[29] J.H. Conway. Universal Quadratic Forms and the Fifteen Theorem. In Quadratic Forms and Their Applications, Contemporary Mathematics vol.272, 23–26, 1999.
[30] J.H. Conway, R.K. Guy. The Book of Numbers. Springer, 1996.
[31] T. Cover, J. Thomas. Elements of Information Theory, 2nd Ed., Wiley, 2006.
[32] E. Csuhaj-Varjú, G. Vaszil. P Automata or Purely Communicating Accepting P Systems. Lecture Notes in Computer Science vol.2597, 219–233, 2002.
[33] E. Csuhaj-Varjú, A. di Nola, Gh. Păun, M. Pérez-Jiménez, G. Vaszil. Editing Configurations of P Systems. Fundamenta Informaticae vol.82(1-2), 29–46, 2008.
[34] V. Danos, C. Laneve. Core Formal Molecular Biology. Lecture Notes in Computer Science vol.2618, 302–318, 2003.
[35] G.B. Dantzig. Linear Programming and Extensions, Princeton University Press, 1963.
[36] J. Dassow, Gh. Păun. Regulated Rewriting in Formal Language Theory. Springer-Verlag, 1990.
[37] G. Delzanno, L. Van Begin. On the Dynamics of PB Systems with Volatile Membranes. Proceedings of the Eighth Workshop on Membrane Computing, 279–300, 2007.
[38] A.E. Eiben, J.E. Smith. Introduction to Evolutionary Computing, Springer, 2002.
[39] J. Esparza, M. Nielsen. Decidability Issues for Petri Nets - A Survey. Journal of Information Processing and Cybernetics vol.30, 143–160, 1994.
[40] C.L. Forgy. RETE: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem. Artificial Intelligence vol.19, 17–37, 1982.
[41] R. Freund, Gh. Păun. On the Number of Nonterminals in Graph-Controlled, Programmed, and Matrix Grammars. Lecture Notes in Computer Science vol.2055, 214–225, 2001.
[42] R. Freund, S. Verlan. A Formal Framework for P Systems. Lecture Notes in Computer Science vol.4860, 271–284, 2007.
[43] E. Friedman-Hill. Jess in Action: Rule-Based Systems in Java, Manning Publications, 2003.
[44] M.R. Garey, D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, 1979.
[45] J.S. Golan. Semirings and Their Applications, Kluwer Academic, 1999.
[46] M. Hennessy. The Semantics of Programming Languages: An Elementary Introduction Using Structural Operational Semantics, Wiley, 1990.
[47] F. Herrera, M. Lozano. Gradual Distributed Real-Coded Genetic Algorithms. IEEE Transactions on Evolutionary Computation vol.4, 43–63, 2002.
[48] R. Horn, C. Johnson. Topics in Matrix Analysis, Cambridge University Press, 1994.
[49] J.J. Hu, E.D. Goodman. The Hierarchical Fair Competition Model for Parallel Evolutionary Algorithms. Proceedings of the Congress on Evolutionary Computation, IEEE Computer Society Press, 49–54, 2002.
[50] G. Hutton. Programming in Haskell, Cambridge University Press, 2007.
[51] O.H. Ibarra, A. Păun. Computing Time in Computing with Cells. Lecture Notes in Computer Science vol.3892, 112–128, 2006.
[52] K. Ireland, M. Rosen. A Classical Introduction to Modern Number Theory. Graduate Texts in Mathematics, Springer, 1990.
[53] C.A. Janeway, P. Travers, M. Walport, M.J. Shlomchik. Immunobiology - The Immune System in Health and Disease, 5th Ed., Garland Publishing, 2001.
[54] N.K. Jerne. The Immune System. Scientific American vol.229, 52–60, 1973.
[55] G. Kahn. Natural Semantics. Lecture Notes in Computer Science vol.247, 22–37, 1987.
[56] N. Karmarkar. A New Polynomial Time Algorithm for Linear Programming. Combinatorica vol.4, 373–395, 1984.
[57] H. Kellerer, U. Pferschy, D. Pisinger. Knapsack Problems, Springer, 2004.
[58] V. Keränen. Abelian Squares are Avoidable on 4 Letters. Lecture Notes in Computer Science vol.623, 41–52, 1992.
[59] V. Keränen. New Abelian Square-Free Endomorphisms and a Powerful Substitution over 4 Letters. Proceedings WORDS, CIRM, 189–200, 2007.
[60] L.G. Khachiyan. A Polynomial Algorithm for Linear Programming. Doklady Akademii Nauk USSR vol.244, 1093–1096, 1979.
[61] J. Kleijn, M. Koutny, G. Rozenberg. Towards a Petri Net Semantics for Membrane Systems. Proceedings of WMC 2005, 439–460, 2005.
[62] D.E. Knuth. The Art of Computer Programming, vol.4: Generating All Combinations and Partitions, 20–21, 2005.
[63] S.N. Krishna. On the Efficiency of a Variant of P Systems with Mobile Membranes. Cellular Computing: Complexity Aspects, Fenix Editora, Sevilla, 237–246, 2005.
[64] S.N. Krishna. The Power of Mobility: Four Membranes Suffice. Lecture Notes in Computer Science vol.3526, 242–251, 2005.
[65] S.N. Krishna. Universality Results for P Systems Based on Brane Calculi Operations. Theoretical Computer Science vol.371, 83–105, 2007.
[66] S.N. Krishna. Membrane Computing with Transport and Embedded Proteins. Theoretical Computer Science vol.410, 355–375, 2009.
[67] S.N. Krishna, Gh. Păun. P Systems with Mobile Membranes. Natural Computing vol.4(3), 255–274, 2005.
[68] M. Kudlek, V. Mitrana. Closure Properties of Multiset Language Families. Fundamenta Informaticae vol.49, 191–203, 2002.
[69] C. Laneve, F. Tarissan. A Simple Calculus for Proteins and Cells. Workshop on Membrane Computing and Biologically Inspired Process Calculi, 147–163, 2006.
[70] F. Levi, D. Sangiorgi. Controlling Interference in Ambients. Proceedings POPL'00, ACM Press, 352–364, 2000.
[71] F. Levi, D. Sangiorgi. Mobile Safe Ambients. ACM TOPLAS vol.25, 1–69, 2003.
[72] H. Lodish, A. Berk, P. Matsudaira, C. Kaiser, M. Krieger, M. Scott, L. Zipursky, J. Darnell. Molecular Cell Biology, 6th Ed., Freeman, 2008.
[73] D. MacKay. Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003.
[74] E.W. Mayr. An Algorithm for the General Petri Net Reachability Problem. SIAM Journal on Computing vol.13(3), 441–460, 1984.
[75] M. Merro, F.Z. Nardelli. Behavioural Theory for Mobile Ambients. Journal of the ACM vol.52(6), 961–1023, 2005.
[76] R. Milner. Operational and Algebraic Semantics of Concurrent Processes. Handbook of Theoretical Computer Science vol.B, 1201–1242, Elsevier, 1990.
[77] M. Minsky. Computation: Finite and Infinite Machines. Prentice Hall, 1967.
[78] D. Molteni, C. Ferretti, G. Mauri. Frequency Membrane Systems. Computing and Informatics vol.27(3), 467–479, 2008.
[79] P. Mosses. Modular Structural Operational Semantics. BRICS RS 05-7, 2005.
[80] H. Nagda, A. Păun, A. Rodríguez-Patón. P Systems with Symport/Antiport and Time. Lecture Notes in Computer Science vol.4361, 463–476, 2006.
[81] M.B. Nathanson. Elementary Methods in Number Theory. Graduate Texts in Mathematics, Springer, 2000.
[82] H.R. Nielson, F. Nielson. Semantics with Applications: A Formal Introduction, Wiley, 1992.
[83] T.Y. Nishida. An Application of P Systems: A New Algorithm for NP-Complete Optimization Problems. Proceedings of the World Multi-Conference on Systems, Cybernetics and Informatics vol.V, 109–112, 2004.
[84] M. Oswald. P Automata, PhD Thesis, Technical University Vienna, 2004.
[85] P. Pacheco. Parallel Programming with MPI, Morgan Kaufmann, 1997.
[86] C.H. Papadimitriou, K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity, Dover, 1998.
[87] A. Păun, Gh. Păun. The Power of Communication: P Systems with Symport/Antiport. New Generation Computing vol.20, 295–306, 2002.
[88] A. Păun, A. Rodríguez-Patón. On Flip-Flop Membrane Systems with Proteins. Lecture Notes in Computer Science vol.4860, 414–427, 2007.
[89] Gh. Păun. Computing with Membranes. Journal of Computer and System Sciences vol.61(1), 108–143, 2000.
[90] Gh. Păun. Membrane Computing. An Introduction. Springer-Verlag, 2002.
[91] Gh. Păun. Membrane Computing and Brane Calculi (Some Personal Notes). Electronic Notes in Theoretical Computer Science vol.171, 3–10, 2007.
[92] Gh. Păun, G. Rozenberg, A. Salomaa. DNA Computing - New Computing Paradigms, EATCS Text Series, Springer, 1998.
[93] Gh. Păun, Y. Suzuki, H. Tanaka. P Systems with Energy Accounting. International Journal of Computer Mathematics vol.78, 343–364, 2001.
[94] A.S. Perelson, G. Weisbuch. Immunology for Physicists. Reviews of Modern Physics vol.69, 1219–1267, 1997.
[95] I. Petre, L. Petre. Mobile Ambients and P Systems. Journal of Universal Computer Science vol.5(9), 588–598, 1999.
[96] M.J. Pérez-Jiménez, A. Romero-Jiménez, F. Sancho-Caparrini. Computationally Hard Problems Addressed Through P Systems. In Applications of Membrane Computing, Springer, 313–346, 2006.
[97] G. Plotkin. Structural Operational Semantics. Journal of Logic and Algebraic Programming vol.60, 17–140, 2004. Initially "A Structural Approach to Operational Semantics", Technical Report DAIMI FN-19, Aarhus University, 1981.
[98] G. Rozenberg, A. Salomaa. The Mathematical Theory of L Systems. Academic Press, 1980.
[99] A. Salomaa. Formal Languages. Academic Press, 1973.
[100] A. Salomaa. DNA Complementarity and Paradigms of Computing. Lecture Notes in Computer Science vol.2387, 3–17, 2002.
[101] C.E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal vol.27, 379–423 and 623–656, 1948.
[102] M. Tomassini. Parallel and Distributed Evolutionary Algorithms: A Review. Evolutionary Algorithms in Engineering and Computer Science, Wiley, 113–133, 1999.
[103] L.R. Varshney, V.K. Goyal. Toward a Source Coding Theory for Sets. Proceedings Data Compression Conference (DCC), 13–22, 2006.
[104] E.W. Weisstein et al. Pairing Function. MathWorld - A Wolfram Web Resource. http://mathworld.wolfram.com/PairingFunction.html
[105] G. Winskel. Event Structures. Lecture Notes in Computer Science vol.255, 325–392, 1987.
[106] D.H. Wolpert, W.G. Macready. No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation vol.1, 67–82, 1997.