## Introduction to processes (Tony Hoare's CSP processes)

During the design of a programming language that allows formal verification I had to design concurrency features as well. In order to define concurrency properly in the setting of formal verification, some form of process algebra is needed. The process algebra, and statements about properties of processes, had to be integrated into the programming language.

The following paper describes how I intend to integrate concurrency into the programming language. I used Tony Hoare's CSP model (communicating sequential processes) as the basic theory.

Since the design of the programming language is still an ongoing activity any comments are welcome.

### Why Eiffel? Why CSP?

From your blog I see you want to make a modern Eiffel that supports, among others, CSP.

Eiffel has been a fantastic programming language that greatly influenced other developments. But would it not be time to move on? In "Vision for a modern Eiffel" you write:

The most recent one -- scala -- has nearly all key features of Eiffel except design by contract. But scala in its current form offers much more than Eiffel. It has type inference, strong support for functional programming, support for small scripts and large programs, complete type safety, concurrency etc. In short, Eiffel has lost the lead.
(…)
The key feature design by contract is still unreached by other languages and opens the way for static verification. But the other key features supported by other languages must be introduced into Eiffel as well (a programmer using the advantages of type inference and functional programming of scala will never switch to Eiffel unless Eiffel offers at least the same and some other interesting features)

I would think that a simpler way to have all Scala's features and Eiffel's design-by-contract in a single language would be to add design-by-contract to Scala.
Is there a reason not to do so? Did you check out what Scala macros can do for this?

Similarly, Communicating Sequential Processes has been a beneficial and revolutionary theory, born in the seventies, just like CCS. But again it is time to move on. The Algebra of Communicating Processes (ACP) is a more solid algebraic approach.
CSP has a rather strange distinction between deterministic choice and non-deterministic choice, which is not present in CCS and ACP.
Both CSP and CCS offer action prefix multiplication as sequential composition. So if a is an action and x, y are processes, then a·x is a process, but x·y is not. This is an unnecessary limitation, as shown by Baeten. In ACP x·y is also a process.

### ACP primer

Would you recommend any particular starting paper on ACP (assuming only casual knowledge of CSP, Kahn processes, etc.)?

### Start from Wikipedia

I still haven't found a good starting paper on ACP myself. The Wikipedia page on ACP is a fair place to start. There you find links to a few open access papers.

If you want to spend a few hundred dollars then consider the Handbook of Process Algebra. Beware that it names the neutral elements 0 and 1. These used to be δ (deadlock process) and ε (empty process) in earlier publications.

Typical research papers can be found, for instance, in the ACP95 Workshop proceedings. I had a paper published therein (starting on page 229), on a simulation of a model factory using a C++/ACP based language. That was a kind of follow-up of this specification in PSF (an ACP based formalism), which is good to compare.

You can download and try a branch of the Scala toolset that includes my current Scala/ACP based language; directions are at www.subscript-lang.org. This should give you a good impression of the power of programming using ACP. The implementation is halfway now; the "C" of "ACP" has not been done yet. Moreover, installing the Scala toolset on Windows is currently hard, due to path string problems. On Linux and Mac it installs fine, and there you can already program nice GUI controllers in ACP style.

Note

In the early decades of ACP the so-called "silent step" τ was studied a lot, together with τ-abstraction, i.e. rewriting some actions to τ. Now the following holds:

τ = ε + η

i.e. τ is a choice between the empty process ε and the hidden step η (the thing that I meant when I wrote "silent action i" elsewhere on this page).
So this τ is not that fundamental. It is better IMO to study η-abstraction, since the ε component in τ-abstraction makes little sense. And I personally never felt any need for such abstractions when programming applications in my ACP based languages.
The upshot: I'd recommend skipping all discussions of τ and τ-abstraction that you encounter in papers.

### Scala/Eiffel, ACP/CSP

Hi Andre,

1. Syntax: For me syntax is a matter of taste. We currently have languages with Algol/Pascal style syntax and languages with C like syntax (C#, Scala, Java, etc.). I don't want to spend a lot of time on syntax. The language I am designing can be put into Algol/Pascal like syntax or into C like syntax. The main difficulty is not syntax, it's semantics. I continue with Algol/Pascal like syntax like Eiffel, because it is easy (for me) to integrate the semantics. But I appreciate that the world is dominated by languages with C like syntax. Therefore it might be possible to offer a parser for C like syntax and a parser for Algol/Pascal like syntax.

2. ACP/CSP: There are a lot of process algebras available. From my point of view the differences are not fundamental but cosmetic. The difference between deterministic and nondeterministic choice is important and is available in most process algebras, e.g. CCS (where nondeterministic choice arises via the tau transition). I am not very familiar with ACP, but if it is a complete process algebra it certainly has this distinction, perhaps under a different name. The plus of CSP above all process algebras is that Tony Hoare has integrated proof rules, which are indispensable for software verification. Therefore I have used the theory of CSP as a basis. And concerning your last point: this is again syntax. In ACP you say that x.y is a process; in CSP you write x->y->skip. If x.y is the preferable syntax I have no problem with that.

### Syntax and ACP/CSP

1. Syntax

Scala's designer Martin Odersky worked as a PhD student with Niklaus Wirth, on Modula-2 and Oberon. This has influenced Scala; it has a Wirthian notation for typing. E.g., the following declaration is valid in both Pascal and Scala:
 var s: String
Scala has the same syntax for if statements and while statements as C and Java, but that is about to change. Odersky has proposed a plan to make these Pascal-like in the near future.

Scala's type parameters are inside rectangular brackets, just like in Eiffel.

The main remaining C-ish syntax in Scala is AFAIK:

• braces for block statements: {…}, rather than begin…end
• assignment, equality and inequality operators =, ==, != rather than :=, =, <>
• boolean operators && || ! rather than and or not

If you think it would be worthwhile to have Scala's features with Pascal syntax for block statements, assignments etc, then it would be really easy to download the Scala compiler project from GitHub and apply the changes.
If you take the other route, i.e. add Scala's features to an Eiffel compiler, then that would be some orders of magnitude more work.

2. ACP/CSP

You write: "The difference between deterministic and nondeterministic choice is important". ACP does not require two distinct operators for choice. Why would it be important to have two? Can you give an example specification that requires two distinct choice operators, and that cannot be expressed in ACP?

You write "The plus of CSP above all process algebras is that Tony Hoare has integrated proof rules which are indispensable for software verification". AFAIK such rules are also known for ACP, see for instance "Process Algebra with Guards: Combining Hoare Logic with Process Algebra" by Groote and Ponse. If you can point out that the rules for CSP are more complete then I would like to hear from you.

About the action prefixing and sequential composition: I expressed myself wrong. What I should have written:

If a, b are atomic actions and x, y are processes, then
a->x
is a process in CSP (like a·x in CCS), and so is
a->x + b->y
(The official CSP notation is a square instead of the plus.) However, a alone is not a process in CSP and CCS. So
(a+b)->x
is not a valid process expression; that restriction is hard to explain to a programmer.
CSP also has a semicolon as a sequential composition operator for processes, but this is only more unnecessary syntax. As Baeten showed, atomic actions may well be treated as processes.

### ACP: communication and interleaving

I certainly have to look more into ACP. It seems to have some interesting properties. But I am not sure whether ACP can be transformed directly into a practical programming language. One problem I see is modelling communication between processes. E.g., let's assume we have two processes p and q which have some communication actions in common (i.e. they have to synchronize on these actions) and some other actions which they can execute independently. In CSP I can write p||q to describe the process where all communication actions between p and q are synchronized and all other actions are arbitrarily interleaved.

In ACP I have not found a corresponding operator. I can write in ACP the expression p|q, which means that all actions between p and q must be synchronized, or p||q, which means that the actions are arbitrarily interleaved; and ACP has the intermediate left merge, which does not correspond to Hoare's || operator either.

Can you help me on this issue?

### ACP-style Communication is OK

My hypothesis:
Any practical CSP or CCS specification may be expressed at least as easily in ACP plus some syntactic sugar.

In ACP the parallel operator || means interleaving parallelism with the option of communication between the operands. The operator || is defined in terms of two auxiliary operators that you do not normally use explicitly in specifications:

p||q = p╙q + q╙p + p|q

• p╙q means: first an atomic action in p starts, and then the rest of p happens in parallel with q
• p|q means: first p and q do a communication action together, and then the rest of p and q happen in parallel

It is well possible to extend programming languages with ACP. See SubScript, a Scala extension that I am working on. I still need to implement communication; I know it can be done for I have done something similar in a predecessor language.

### ACP: communication and interleaving

Let me know if you encounter a possibility to model CSP communication (i.e. synchronization on common events and interleaving on the other events) in ACP. The primitive operations in ACP like p||q, p|q and the left (right) merge are definitely something different, as you described in your last post as well.

### Correctness of internal and external choice

I'm not familiar with ACP, CSP, or CCS, but the difference between internal ("nondeterministic") and external ("deterministic") choice matters for specifying correctness criteria for individual parts of a system. If we program a subsystem so that it does "X," then we can also claim that it does "our choice of X or Y," but we can't claim it does "environment's choice of X or Y."

It sounds like someone working in ACP might be able to deal with the distinction using a metalogical approach (and perhaps sacrificing the "in ACP" part of your hypothesis): Assign an index variable to each "X or Y" choice, and then use set theory or type theory's universal and existential quantification to reason about the interactions of these choices. Does this seem like a drawback or a plus? I don't think I can evaluate the practicality of this myself at this point.

### I never understood

why CSP requires two kinds of choice operators. It seems unnecessary to me. Suppose we have in ACP an external ("deterministic") choice:
x + y
Then we can turn this effectively into an internal ("nondeterministic") choice by letting both alternatives start with an internal action named i:
i x + i y
When programming with ACP you would not write such specifications, except for deliberate random behavior.
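That difference can be made concrete with a small sketch. The following Python toy (the process encoding and the name `menus` are inventions for this example, not part of any ACP tool) computes which sets of initial visible actions the environment may be offered once the hidden i steps are resolved:

```python
# Toy illustration of external vs internal choice (a sketch, not an
# ACP tool). A process is "STOP", ("pre", action, rest), or ("+", p, q);
# the action "i" is a hidden internal step.

def menus(p):
    """Sets of initial visible actions the environment may be offered
    once the process has resolved its hidden i steps. Handles only
    choices whose operands are either both i-prefixed or both not."""
    if p == "STOP":
        return [frozenset()]
    if p[0] == "pre":
        # a hidden step commits silently to whatever follows it
        return menus(p[2]) if p[1] == "i" else [frozenset([p[1]])]
    if p[0] == "+":
        left, right = p[1], p[2]
        if left[0] == "pre" and left[1] == "i":
            # i x + i y: the process itself picks one side
            return menus(left) + menus(right)
        # a x + b y: a single menu offering both initial actions
        return [m1 | m2 for m1 in menus(left) for m2 in menus(right)]

ext = ("+", ("pre", "a", "STOP"), ("pre", "b", "STOP"))          # a + b
internal = ("+", ("pre", "i", ("pre", "a", "STOP")),
                 ("pre", "i", ("pre", "b", "STOP")))             # i a + i b

print(menus(ext))       # one menu {a, b}: the environment chooses
print(menus(internal))  # two menus {a} and {b}: the process already chose
```

For the external choice there is a single menu offering both actions, so the environment decides; for the i-prefixed version there are two one-action menus, because the process has already decided.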

### Choice in CSP

The nondeterministic choice operator in CSP can be thought of as syntactic sugar for exactly the construction you describe. Indeed, the FDR2 refinement checker that came out of the Oxford CSP research group uses a hidden action named "tau" in its internal representation of nondeterministic choice.

Is such an operator strictly necessary from a theoretical point of view? Not really. But the development of CSP, unlike that of CCS and ACP, has often focused more on pragmatism than on finding the smallest possible set of operators. Nondeterministic choice turns out to be useful for some modelling tasks, as well as for defining abstract specifications that leave the implementer with some freedom to decide what to do, so having a compact syntax to represent it is helpful.

### You confirm that ACP has nondeterministic choice

whether you write "i x + i y" or "x /\ y" (using /\ as an ascii representation of Tony Hoare's nondeterministic choice) is for me just a syntactic difference. In CCS you use the "tau" as an internal event.

That confirms my previous point. Any serious process algebra must have a means to express nondeterministic choice. Whether as internal event or as a special operator is just syntactic sugar. From the syntactic point of view I agree completely with you: There is no absolute necessity of a separate nondeterministic choice operator.

But having this operator has some advantages. You can express nondeterminism without "inventing" internal events. As some others (Ross, Allan) already remarked on this thread this gives you expressive power for specifications. E.g. for the specification of a vending machine which returns change money it is completely unimportant which coins and bills it uses to give the change. The only important thing is that it gives the correct sum. The implementor has then the freedom to use the available coins and bills in the machine to return the correct change money. Of course you can use internal events to express such a specification. But I suspect that such a specification is less readable and less clear than a specification which uses the nondeterministic choice operator.

### deterministic process models

Non-deterministic choice or interleave can be convenient. And it is part of many process algebras. But I'm not convinced it's essential.

For specifications, if we want to say "it needs the correct sum", it would always be clearer to specify that directly rather than represent it as a non-deterministic choice of coinage. If we start using the /\ operator, for example, it quickly becomes difficult for a human to validate that both sides of the operator provide the same change, much less correct change. In any case, specifications should be described for interactions with abstract processes, not for a concrete process represented with non-deterministic choices.

For runtime behavior, a vending machine can easily provide change as a deterministic function of what change is available and how much is needed.
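As a minimal sketch of that point, here is a deterministic change function in Python (denominations and amounts are made up; greedy largest-coin-first is used, which is exact only for canonical coin systems, so a real machine would need a dynamic-programming fallback):

```python
def make_change(amount, available):
    """Return coins for `amount`, given `available`: denomination -> count.
    Deterministic: the same inventory and amount always yield the same
    coins. Greedy largest-coin-first; exact only for canonical systems."""
    inventory = dict(available)          # don't mutate the caller's dict
    change = []
    for coin in sorted(inventory, reverse=True):
        while amount >= coin and inventory[coin] > 0:
            amount -= coin
            inventory[coin] -= 1
            change.append(coin)
    if amount != 0:
        raise ValueError("cannot make exact change greedily")
    return change

print(make_change(65, {50: 1, 20: 2, 10: 1, 5: 2}))  # [50, 10, 5]
```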

And we don't need non-deterministic process interactions or interleave. The reason non-determinism is convenient is to ensure a commutativity property (p||q = q||p). But it is not unreasonable to seek commutativity by other means (e.g. temporal semantics), or to abandon it.

Non-determinism is troublesome for a lot of reasons that I shouldn't need to mention here. Why should non-determinism be a pervasive effect in a process algebra? Why call it a "must have" for "any serious process algebra"?

### Think modeling, not implementation

Reasons to have the ability to represent nondeterminism in a process algebra (either through ACP's approach, or via syntactic sugar):

• To permit modeling of existing processes that have nondeterministic behavior (e.g. an external physical process that your software may interact with, but cannot control)
• To permit modeling of processes that have a range of acceptable behaviors (something that "may" happen but doesn't "have to" happen)
• To permit specification of systems that can be implemented in several possible ways (again several acceptable behaviors). The vending machine change example given above was possibly poorly stated. I agree that it would be better to specify what constitutes "correct change" if all you care about is whether the right change is delivered. However, consider that the same coins may be delivered in different sequences. If you are modeling at (and therefore care about) the level of individual coin-delivery actions then all sequences lead to the "correct change", and the set of possible behaviors could be expressed as a nondeterministic choice over delivery sequences (provided you don't actually care about the order of the coins, or permit more than one possible sequence). Obviously if we're not treating individual coin-delivery as an observable action then this becomes irrelevant. But I'm sure you can think of other examples of situations in which more than one possible sequence of observable actions is acceptable.

Bill Roscoe's Theory and Practice of Concurrency has several examples of nondeterministic choice as a useful tool for expressing abstract specifications of behavior.

### Inventing internal events?

You'd prefer nondeterminism without "inventing" internal events. In ACP these internal events do not need to be invented: they are already there, called actions, which are basic operands for expressions. A special action would be a silent action that I named "i".

ACP has only two basic composition operators: addition (= choice) and multiplication (=sequence). All other composition operators should be defined in terms of these two. This would be easy for nondeterministic choice, e.g. by

x /\ y = i x + i y

But this would not be enough when you add other composition operators such as disruption. The most significant axiom for a disrupt operator "/" would be:

a·x / y = a·(x/y) + y

So a deterministic choice comes up, making this operator "/" a deterministic disruption. Now if there were an alternative operator for nondeterministic choice, we should likewise (for the sake of equal treatment) have an alternative operator for nondeterministic disruption. We would need another symbol and another set of defining axioms. And this is just the disruption feature; there are also interruption and many other useful compositions that would come in double versions.
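For finite behaviours the deterministic disrupt axiom can be animated with a few lines of Python. This is a sketch under assumptions: the names are invented, and the termination rule (an exhausted left operand simply terminates) is an assumption of this example, not taken from the ACP literature:

```python
# Animates a·x / y = a·(x/y) + y for finite traces (illustrative only).

def disrupt(p, q):
    """Yield all completed traces of the disruption p / q."""
    if not p:
        yield ()                        # p finished; no disruption left
        return
    yield tuple(q)                      # q takes over right now
    for t in disrupt(p[1:], q):         # or p performs its next step
        yield (p[0],) + t

# a·b / d unfolds, per the axiom, to d + a·d + a·b:
print(sorted(set(disrupt(("a", "b"), ("d",)))))
# -> [('a', 'b'), ('a', 'd'), ('d',)]
```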

Besides, this nondeterminism is often not needed when writing specifications, in my experience, and certainly most often not needed when writing programs.

tl;dr: I think special syntax for nondeterministic choice does not add value.

### YMMV

tl;dr: I think special syntax for nondeterministic choice does not add value.

You're welcome to your opinion, of course. My experience has differed from yours. I've found nondeterministic behavior sufficiently useful in modeling that having special syntax is quite helpful.

### ACP parallelism is communication & interleaving

Hi Helmut,

I think you are mistaken. ACP parallelism does synchronisation on communicating actions, and interleaving on other actions. It is a bit different in the details from CSP and CCS, but for practical specifications ACP is at least as convenient as the other two.

E.g. suppose a,b,c,d are normal actions, and s and r are actions that communicate with one another (yielding sr). Then the parallel composition
a b s || c r d
will result in one of the following traces
a b c sr d
a c b sr d
c a b sr d
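That trace set can be checked mechanically. The following Python sketch (illustrative names, not from any ACP tool) enumerates the traces of the merge, letting s and r communicate into sr, and then applies encapsulation, i.e. drops the traces in which s or r occurred without communicating:

```python
# A toy ACP-style merge over processes that are plain sequences of
# action names. gamma says which pairs of actions communicate and
# what the result is (here s|r -> sr, as in the example above).

def merge(p, q, gamma):
    """Yield all completed traces of p || q: interleaving plus the
    option of communication between the first actions of p and q."""
    if not p and not q:
        yield ()
        return
    if p:                                     # p does its first step
        for t in merge(p[1:], q, gamma):
            yield (p[0],) + t
    if q:                                     # q does its first step
        for t in merge(p, q[1:], gamma):
            yield (q[0],) + t
    if p and q and (p[0], q[0]) in gamma:     # they communicate
        for t in merge(p[1:], q[1:], gamma):
            yield (gamma[(p[0], q[0])],) + t

def encapsulate(traces, blocked):
    """Encapsulation: drop every trace containing a blocked action."""
    return {t for t in traces if not set(t) & blocked}

gamma = {("s", "r"): "sr", ("r", "s"): "sr"}
traces = encapsulate(merge(("a", "b", "s"), ("c", "r", "d"), gamma),
                     {"s", "r"})
print(sorted(traces))
# -> the three traces listed above: a b c sr d, a c b sr d, c a b sr d
```

Without the encapsulation step, traces with unsynchronized s and r would appear as well.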

If you have a practical use case for which you think CSP offers a good solution while ACP does not, then please let me know.

### ACP: communication and interleaving

Up to now I have understood the operator || in ACP differently.

From the papers I have read about ACP I usually get the definition
 x||y = x||_y + y||_x + x|y
which in my interpretation allows arbitrary interleaving or communication.

I admit that I am not an expert in ACP and I would like your interpretation. But the available papers on the internet tell a different story. Can you give me a reference to ACP where the communication is explained in the manner you have written above?

### ACP Communication operator

x|y means: first x and y start communicating, and then the rest of x and y happen in parallel. See for instance this axiom:

a·x|b·y = (a|b)·(x||y)

So ACP parallelism offers a mix of interleaving and communication, just as you would need.

### But x||y can be x|y or x||_y

But x||y can be x|y or x||_y or y||_x. The first alternative stands for the communication which I would like to see in this case. But the second and third alternative allow that the first events/actions of x or y happen independently and arbitrarily interleaved. The definition of left merge ||_ does not make any distinction between communication actions and other actions. So we can have synchronized communication or not. This is my interpretation of the '+' operator in ACP. Did I get something wrong here?

If yes, please give me a reference to a clear description of ACP which is consistent with your interpretation (which I would prefer).

### The left merge operator allows for communication

The left-merge operator says that the left hand operand starts with an action and then the rest of it happens in parallel with the right hand operand; this parallelism again includes the possibility of communication.
Take for example a·s||b·r with a,b normal actions, and s,r actions that communicate with one another, yielding c. Apparently this process should reduce to a·b·c + b·a·c

How does this work? Apply the axioms from the Wikipedia page on ACP as rewrite rules:
 a·s||b·r
 = a·s╙b·r + b·r╙a·s + a·s|b·r
 = a·(s||b·r) + b·(r||a·s) + (a|b)·(s||r)
 = a·(s╙b·r + b·r╙s + (s|b)·r) + b·(r╙a·s + a·s╙r + (r|a)·s) + 0·(s||r)
 = a·(s·b·r + b·(r||s) + 0·r) + b·(r·a·s + a·(s||r) + 0·s) + 0
 = a·(s·b·r + b·(r╙s + s╙r + r|s)) + b·(r·a·s + a·(s╙r + r╙s + s|r))
 = a·(s·b·r + b·(r·s + s·r + c)) + b·(r·a·s + a·(s·r + r·s + c))

After this rewriting, the r and s actions are hidden, i.e. rewritten to 0, so that the following remains:

 a·(0·b·0 + b·(0·0 + 0·0 + c)) + b·(0·a·0 + a·(0·0 + 0·0 + c))
 = a·b·c + b·a·c
In this comment I gave some literature references to ACP.

### The first rewritings you

The first rewritings you have made are consistent with the axioms of ACP which I found in the literature. However, I have not found a rule that allows you to hide (i.e. rewrite to 0) the r and s actions. Why is it possible (i.e. based on which rule) to rewrite a*s*b*r into a*0*b*0?

### Hiding in ACP

Hiding is done using the encapsulation operator ∂H ("dau" H) with H={r,s}. You would enclose the process specification inside ∂H(...). See the Wikipedia page on ACP and various papers.

SubScript does communication a bit differently; no hiding is needed (nor possible). Communication is done using "shared scripts"; the special case of one involved communicating partner is normal script refinement. More than 2 partners are equally possible. Also, the body of a communication is not restricted to an atomic action.

### André, thanks for the hint

André, thanks for the hint with the encapsulation operator.

So in ACP the equality

    a*s||b*r  = a*b*c + b*a*c


is not valid. The correct equality is

    ∂_{r,s}(a*s||b*r) = a*b*c + b*a*c


From my point of view this seems to be a rather complicated way to express communication.

### It is not that bad

In practice ACP specifications do not really suffer from the obligation to encapsulate, IMO. But doing without would be a bit easier. Fortunately the encapsulation is not really needed: SubScript treats communication as a "multi caller-single callee" refinement.