message, stream and protocol typing

Suppose you want to type-check a message protocol, or a stream grammar, or partially ordered datagram exchange. The type signature might be declared in advance, or you might be told at runtime during negotiation.

What kinds of type declarations, comprehensible to average practitioners, can work well enough without a lot of training or priesthood snake-oil? I ask because sometimes I see type expertise languishing in what looks, from my view, like a backwater. I want to type process pipelines, stream flows, crypto exchanges, and connection protocols.

I only ask to see if interesting ideas come up. A side channel in rumination recently was about static analysis of what might happen if shell command lines were executed, given sufficient meta-info declarations about dataflow behavior. I thought a developer might be interested in tool support for analysis of what happens given components that interact with specified finite state machines and message formats, etc. You can elaborate the idea further from there.

New abstract creative reasoning is what I'd like most, rather than existing work -- which I would expect to be narrow, concrete, unimaginative, and grinding in application. Of course I don't mind being contradicted with beautiful work.



I've mentioned an abstraction called Processes that I use to model imperative features in a purely functional way. I have a half-written paper on this, but I'm not going to let you fill my mind up with the details and knock any details of my logic out of cache. It sounds like it's a good match to your post, though, so I'll say something.

FYI. I remember that last time I used the word Process you were inclined to think about parallelism and concurrency, so let me state at the outset that everything I am about to say is completely single threaded deterministic (in principle).

The idea is that a Process accepts a message, changes state and sends a message. It is very handy to be able to organize the messages that a Process is sending and receiving into groups that can be hooked up and routed without knowing the particulars of the messages being exchanged. I call these groups "ports", so that you have processes exchanging messages on various ports. So you can connect processes abstractly by hooking up ports, and you can have subports.

So we have these ports with messages going back and forth. How do we type them? Well, let's define a Protocol to be a set of messages allowed in either direction plus a transition function that gives a new Protocol in response to each message. You can imagine very detailed protocols with lots of states that finely specify allowed messages or coarse grained protocols with a smaller number of states (maybe just one or two) that would be more appropriate at the type level. Finite state protocols are important for typing.

What can you do with protocols? One thing you can do with a protocol P is reverse the direction of all of the allowed messages. I call that -P. Hooking up processes? Well, you need a male (P) and a female (-P) port and then you can connect them.

Another thing you can do is to translate all of the messages on a protocol to a subport. I've been writing that as a*P for port a and protocol P.

We have the obvious sub-protocol relation, P1 < P2.
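A minimal executable sketch of these ideas may help. All names here are hypothetical (this is my reading of the description, not the author's notation): a Protocol is an allowed-message set plus a transition function, -P reverses every message's direction, and a*P relabels messages onto subport a.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

IN, OUT = "in", "out"   # message direction, relative to one side of a port
Msg = Tuple[str, str]   # (direction, tag)

@dataclass(frozen=True)
class Protocol:
    """A set of allowed messages plus a transition function giving the
    next Protocol in response to each allowed message."""
    allowed: FrozenSet[Msg]
    step: Callable[[Msg], "Protocol"]

def neg(p: Protocol) -> Protocol:
    """-P: the same protocol with the direction of every message reversed."""
    flip = {IN: OUT, OUT: IN}
    return Protocol(
        frozenset((flip[d], t) for d, t in p.allowed),
        lambda m: neg(p.step((flip[m[0]], m[1]))),
    )

def subport(a: str, p: Protocol) -> Protocol:
    """a*P: the same protocol with every message relabeled onto subport a."""
    return Protocol(
        frozenset((d, f"{a}.{t}") for d, t in p.allowed),
        lambda m: subport(a, p.step((m[0], m[1][len(a) + 1:]))),
    )

def ping_pong() -> Protocol:
    """Two-state alternating protocol: receive 'ping', then send 'pong'."""
    st0 = Protocol(frozenset({(IN, "ping")}), lambda m: st1)
    st1 = Protocol(frozenset({(OUT, "pong")}), lambda m: st0)
    return st0
```

Under this encoding, neg(ping_pong()) starts by sending 'ping', and subport("a", ping_pong()) starts by receiving 'a.ping', so hooking up a male/female pair is just requiring one side's protocol to be the negation of the other's.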

The Protocol of a Process (at top level) will always be an alternating protocol. That is, in the initial state all messages will go in and then in the next state all valid messages will be going out. But when you're talking about protocols on one port of many, this isn't true. We can have a situation where all messages on port a go from P1 to P2, all messages on b go from P2 to P3, and all messages on c go from P3 back to P1.

Processes can also easily model effectful procedures. We can have a procedure type like this:

foo : { st: ST Nat ~> ST String, io: IO } Char

I'm not committing to the notation, but basically this would say that foo is a Process that communicates on ports 'st' (for state) and 'io'. This says that port 'st' should be in protocol state ST Nat when flow enters the procedure (on special port K) and should be in state ST String when flow exits the procedure (on port K). [K means continuation -- I use it for calling processes as functions/methods] I think it will be useful to write IO for the before and after protocol of port 'io' rather than writing IO ~> IO.

I guess that's a reasonable overview. Ports should be dynamic. I remember having concerns with aliasing but I think I got them worked out ok. Oh, and Processes also make good models for mutable objects... OK. The end.

quick rewards will spoil me

Thanks, that was a much better response than I'd hoped for, and your prose style is very clear and effective. Leading with the gist of an idea works well, especially when motivated by a lucid order of concerns arising naturally from questions.

I need to read it several times though, trying to imagine plausible details. (I get really tired of going over my own past thought processes, so anyone else's train of thought is refreshing.)

At a top level, my main concerns are two-fold. The first is obvious: a tool should be able to give useful feedback, so a dev isn't left winging things by trial and error that never reaches closure when arrangements are open-ended. The second concern is conveying explicit organizational intent to readers: summarizing patterns of activity at a high level, itemizing things and relations, affording an overview. I want detail in reverse pyramid order, narrow point down to broad base. Trying to infer finite state machine graphs from reading implementation, for example, is an error-prone and unnecessarily complex chore.

Compacting detail down to an abstract blueprint doesn't happen as much as it ought to in software, and where it does, people give up too easily at program boundaries. Distributed code can be more like a mosh pit than choreography.

I mostly think about single threaded processes too, despite being logically concurrent (with parallelism unobservable). Process interaction is non-deterministic though, since message arrival is non-deterministic, even if individual processes hew to a rigid plan.

Protocols proper

I've been mainly focused on processes for use as an object/effect system, so I haven't given much attention to using protocols to oversee the exchange of messages between two machines on a network or something. I think this is definitely a use case, but I'm stretched thin. I agree with the point about tools being important. Looking at a state machine diagram is much easier than reading one in code. I think the benefits here are reduced when the states are parameterized and you need to read anyway to understand the parameterization.

I think a good type system can help with your second concern, because you can hide all of the states associated with certain "subprotocols" (used in a different sense than earlier). So you can see a high level state "send file" that elides all of the myriad sub-states involved in making that happen. I've worked through the details of an example something like this, but it's not fresh.

Process interaction is non-deterministic though

I'm telling you that this isn't the case with my Processes -- when Processes are combined in my framework, the result is a back-and-forth alternation of message exchanges. And so maybe you're telling me that Process is a bad name for this abstraction. :)

process might be just the word

It's a good enough fit for the term. I'm used to words not quite fitting. (I expect divergence by default, striking some folks as odd.) The following made me flash on an old concept, but I don't mean grief by it.

because you can hide all of the states associated with certain "subprotocols" (used in a different sense than earlier). So you can see a high level state "send file" that elides all of the myriad sub-states involved in making that happen.

That totally gives me a DeMarco structured analysis flashback, which is either completely wrong, or too obvious to point out. Is nesting abstraction the gist?

Sometimes when pulling apart words, erasing one too many distinctions can suddenly devolve a narrative so everything seems equivalent to everything else, after deconstruction. After squinting, similar things blur together: sub-FSMs (finite state machines) and stack activation frames in object methods, when the objective is nesting. Niceties of timing distinction sometimes seem the only thing making a big difference in things like actors, threads, and finite state machines.

I like the idea of process as one highlighting potential to run out of sync like a coroutine, without any requirement that rendezvous occurs any more clearly than message ordering that acts as evidence. But your process might be more deterministic like a DeMarco system element.

P1 and P2

I'm a bit confused here. You use P1 and P2 to name protocols. Then, in the next paragraph, the same names are used for processes. Is this right?

a Process accepts a message, changes state, and then switches to a new state

I'm assuming a process sends some messages as well. :)

Our previous discussions on processes did help influence my AVMs. Mostly, I wanted easy support for transparent, purely functional unit testing of a graph of networked machines.

Brought to you by the letter P

I have processes, ports, protocols and procedures. All of my variables start with P. (Yes, those were supposed to be Processes).

a Process accepts a message, changes state, and then switches to a new state

Arg, thanks. My mind had moved on to the next sentence, apparently. Fixed.

Thanks for the AVM link. (And be sure to let me know when you have something I can play with!). I actually don't particularly like message passing either as an abstraction for building software. What I think is useful is the ability to collect a group of constructions, built by other means - particularly those that interact with state, and view it as a Process interacting with the world via message passing.

in other words?

I don't think it's the same,

I don't think it's the same, but wouldn't find it shocking to learn that it is subsumed by some system out there, as the concepts involved are all very fundamental. I have the idea that using this abstraction for an object/effect system is at least a little novel, though my knowledge of prior art and related work is very spotty.

Sounds process-calculus-ish to me too

My first thought on reading your description was also that it sounded a whole lot like (a constrained) process calculus, with subtyping seeming a lot like some kind of CSP-style refinement relation. But maybe that's just my lack of understanding about what you're really proposing. That half-written paper sounds like it'd be an interesting read...

Not sure

I'm not very familiar with process calculi, but I've looked through it again and I don't see how to get the same kinds of constructions very easily. Aren't most process calculi focused on defining a specific (non-deterministic) process behavior rather than defining a set of admissible exchanges? I guess you can simulate a finite set of possibilities with non-determinism, but how would you encode this protocol in a process calculus?

St0: Send a Nat and go to St1.
St1: Receive a Nat and go to St0.

Can you do that? (I don't know). A bigger question would be whether there are mechanisms for reasoning about Protocols abstractly through a distinguished set of states.

I haven't released my paper yet because there are issues with the type system that I feel are close to getting sorted out and I think that would make for a better read. I may try to spend some hours on it this weekend and then release something.


Perhaps it depends on the process calculus. The pi-calculus and its ilk are fairly low-level, and might require a lot of work to express a large range of possibilities. But something like LOTOS or CSP makes it fairly easy. For example, in CSP I might express your protocol as

    Protocol = St0
    St0 = |~| x:Nat @ x -> St1
    St1 = [] x:Nat @ x -> St0

which essentially says "Start in St0. Output may be any natural number (non-deterministic choice over Nat), following which it will accept any natural number." If you wanted to specify input and output ports you might say

    channel in, out : Nat
    Protocol = St0
    St0 = out!x:Nat -> St1
    St1 = in?x:Nat -> St0

which produces similar behavior to that above, but constrains outputs to occur on out and inputs to occur on in.

We might define a sub-protocol something like

    SubProtocol = subSt0
    subSt0 = out!x:{1..10} -> subSt1
    subSt1 = in?x:Nat -> subSt0

which defines a process that only outputs numbers in the range 1 to 10. Since the set of possible sequences of output and input events of SubProtocol is a subset of the set of Protocol sequences we can say that

     Protocol [T= SubProtocol

or that SubProtocol is a trace refinement of Protocol. It wasn't clear to me what the semantics of your sub-typing relationship were, but trace refinement would certainly be one candidate. There are also other, more constraining refinement relations, as well as even more tightly constrained relations like bisimulation.

Translating messages to subports can be accomplished through renaming. I'm not sure how to support your "reverse the direction of all messages" operation within the process calculus. But I suspect it could be easily achieved through a purely syntactic transformation. I'm also not sure exactly what you mean by "mechanisms for reasoning about Protocols abstractly through a distinguished set of states".


Thanks, pi calculus is the one I'm most familiar with (which isn't saying much). I've studied CSP before briefly, but LOTOS is new to me.

I agree there's a strong similarity between the examples you've given and how it would work in my system. I hope I can understand the relationship between the two, especially since I have concurrency on my eventual agenda.

The trick of splitting the input and output channels is interesting. Maybe it's even simplifying, but right now it seems to me to change the flavor of the result. One of the simplifying ideas for me was to stop focusing on the state of the process and to focus instead on the state of the protocol.

I have a general join operation with a type that's something like:

join: Process (Q + P) -> Process (R - P) -> Process (Q + R)

Do any of the process calculi allow you to quantify over protocols like this?

Translating messages to subports can be accomplished through renaming.

How would that work, though, if we're quantifying over protocols? For example, here's the type of a refl combinator that reflects incoming messages on port a to outgoing messages on port b, and vice versa:

refl: a -> b -> Process (a*P - b*P)

I had in mind that P is a subprotocol of Q if:

1. The allowed messages of P are a subset of those of Q, in the initial states.

2. For any message m allowed by P, the protocol of P in response to m is a subprotocol of the protocol of Q in response to m, recursively.
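For finite-state protocols this definition is directly executable. Here is a sketch under a hypothetical encoding where a protocol is a (state, table) pair and the table maps each state to its allowed messages and their successor states:

```python
def is_subprotocol(p, q, seen=None):
    """P < Q per the two conditions above, read coinductively: when a
    (state, state) pair repeats on a cycle, assume the relation holds."""
    (ps, ptab), (qs, qtab) = p, q
    seen = set() if seen is None else seen
    if (ps, qs) in seen:
        return True
    seen.add((ps, qs))
    for msg, pnext in ptab[ps].items():
        if msg not in qtab[qs]:
            return False        # condition 1 fails: Q disallows msg
        if not is_subprotocol((pnext, ptab), (qtab[qs][msg], qtab), seen):
            return False        # condition 2 fails: residuals not related
    return True

# Q allows "a" or "b" initially; P allows only "a".
QTAB = {0: {"a": 1, "b": 1}, 1: {"c": 0}}
PTAB = {0: {"a": 1}, 1: {"c": 0}}
```

With these tables, is_subprotocol((0, PTAB), (0, QTAB)) holds but the reverse does not, since Q allows "b" where P doesn't.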


It's not entirely clear to me what your join type is intended to mean. What is the meaning of Process (Q + P)?

Ignoring the Q + P part, if the idea is to combine two processes in a way that "hooks up" P to -P such that those communications are accomplished in a way that isn't observable outside of the combined process, that seems like it could accomplished through a combination of parallel composition and name hiding. Something along the lines (in CSP syntax) of

Join(Proc1, Proc2, P) = (Proc1 [|P|] Proc2) \ P

This essentially creates a new process that is a composition of the processes Proc1 and Proc2 such that those processes communicate via the set of messages in P, but those communications are not visible to the outside world. Of course, the composite process has a more complex state space than the individual processes.

I'm not entirely certain what the expression for refl is supposed to mean. But I don't believe that there is an operator that accomplishes the meaning of your '-' operation, which makes the more general refl process a little difficult to create. The problem being that the '-' operation seems like it would involve changing the protocol state machine such that "inputs" (in which the environment makes a choice about the value) become "outputs" (in which the process makes a choice about the value), and vice versa. I'm not sure how to accomplish such a global transformation of a state machine in a process algebraic context, unless it's done as a syntactic transformation on the process expression. Even then, I would imagine that the state transitions themselves would have to also change, if we're talking about protocols with more complex state-spaces than the simple two-state model we discussed earlier. I'd be interested to see how you handle it in your model.

(Process P) is the type of

(Process P) is the type of processes with protocol P.

The sum protocol P + Q is the protocol that allows either the messages on P or the messages on Q in the initial state. P and Q are required to be disjoint (different messages). If a message would transition P to P', then it transitions P + Q to P' + Q. And if it would transition Q to Q', then it transitions P + Q to P + Q'.
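A runnable sketch of this sum (a hypothetical minimal encoding; messages are plain tags here and the direction bookkeeping is elided):

```python
# Protocols as objects with an allowed-message set and a transition function.
class Proto:
    def __init__(self, allowed, step):
        self.allowed, self._step = frozenset(allowed), step
    def step(self, msg):
        return self._step(msg)

def sum_protocol(p, q):
    """P + Q: allows the messages of either side (required disjoint);
    a message belonging to P advances P and leaves Q alone, and
    symmetrically for Q."""
    assert not (p.allowed & q.allowed), "P and Q must be disjoint"
    def step(msg):
        if msg in p.allowed:
            return sum_protocol(p.step(msg), q)
        return sum_protocol(p, q.step(msg))
    return Proto(p.allowed | q.allowed, step)

def cycle(*tags):
    """A protocol allowing tags[0], then tags[1], ..., then repeating."""
    def mk(i):
        return Proto({tags[i]}, lambda m: mk((i + 1) % len(tags)))
    return mk(0)
```

For example, sum_protocol(cycle("ping", "pong"), cycle("get", "put")) initially allows "ping" or "get"; after "ping" it allows "pong" or "get", with the two components advancing independently.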

That type I wrote is wrong - you need to specify which port to join like in your CSP example:

join: a -> Process (Q + a*P) -> Process (R - a*P) -> Process (Q + R)

I'm writing P - Q as a shorthand for P + (-Q).

Even then, I would imagine that the state transitions themselves would have to also change, if we're talking about protocols with more complex state-spaces than the simple two-state model we discussed earlier.

I'm not positive that I'm catching your point, here, but I suspect that this gets back to my point about the advantage of focusing on Protocols rather than Processes. I'm not saying that there's a simple relationship between the types Process P and Process (-P), which might be where you'd expect to find a complex effect on state space.

It occurs to me that the subtyping relation you were expecting on protocols might be the relation induced by typing Processes, since that's where process calculus focuses. When is Process P < Process Q? In that case, you have to take into account the direction of messages being sent: P should send fewer messages and receive more messages than Q in corresponding states. Is that what the [T= relation is?

I haven't done much cleanup work yet but will try to post something for you to look at soon.


It occurs to me that one thing that probably needed to be explained is that the type I've shown for join there isn't fully general. It's a way of joining processes that factor in a nice way which is a useful case for objects and effects.

P+Q possibly equivalent to P ||| Q

Hmmm... your description of P + Q sounds a lot like what CSP would call "interleaved parallel", and other process calculi might simply call "parallel". Given protocols P and Q, the kind of behavior you're describing could be written as:

plus(P, Q) = P ||| Q

where the operational semantics are essentially what you just described: if some message causes P --> P', then that same message causes P ||| Q --> P' ||| Q (and similarly for messages in Q).

I suspect we may be getting hung up a bit here on names. I am using CSP "processes" to represent your protocols. It's not really clear to me what you mean by a "process" that isn't captured by a "protocol", unless perhaps it's the fact that one "process" may incorporate several "protocols". Is there more to it than that?

I'm not positive that I'm catching your point, here, but I suspect that this gets back to my point about the advantage of focusing on Protocols rather than Processes.

To be honest, I'm not sure that the point I was trying to make was even valid. In working through a couple of quick examples, I couldn't see anything that exhibited the kind of problems I had in mind. I may have been getting myself confused thinking of things more complex than protocol state machines.

The [T= relation is a relation on the sequences of events (aka messages) produced by processes. It pays no attention to states. As a simple example, if I have a process P1 that can produce any sequence of interactions composed of messages a and b, and a process P2 that can produce only sequences of message a, then P1 [T= P2, since the set of all possible sequences produced by P2 is a subset of the set of all possible sequences produced by P1 (since P1 can produce sequences containing only a, but also sequences containing b as well). This sounds like it's slightly different than what you have in mind.

Sounds the same

My take at comparison of my system to a process calculus like CSP would be that process calculus describes a protocol globally. It explains the ordering of sends and receives by possibly many parties participating in the protocol over a set of channels. My protocols, on the other hand, are local, and they just describe the messages exchanged over a particular channel (which may have subchannels). A process is an entity that sends and receives messages, according to a protocol.

It seems at first glance that global protocol descriptions should be more general than the local protocol descriptions that I'm doing. For an encoding, maybe we could introduce labels L and R (left and right), and then label all subexpressions in the CSP expression according to which side of the protocol they're on. We would then require that all specified sends and receives occur between oppositely labeled subexpressions. My negation would be modeled by switching all of the labels.

plus(P, Q) = P ||| Q

Yes, this sounds the same. One note about the relation of processes and protocols is that processes require alternating protocols. So if P and Q both allow receiving a message in their initial state, (P + Q) will allow a double receive (on P and then Q, say). But a process of type Process (P + Q) can't really support a double receive. A Process always receives and then sends, in alternation. So Process P is equivalent to Process |P| where |P| is the maximum alternating subprotocol of P.

linear types

Linear types are really useful for typing protocols and stream grammars and don't require conventional forms of structured programming.

Essentially, you model the protocol state in terms of tokens (values or objects) held by each participant. These tokens are given linear types to ensure they are used rather than forgotten, and used at most once, so you can control the plurality of actions in the protocol. The tokens model actions in the protocol: upon performing an action with a token, you move to another state in the protocol by returning an appropriate linear object. Some tokens supply a choice of actions, but you can only select one.

Even without linear types, the sort of APIs you develop with linearity as the guiding concept have a lot of nice properties: the protocol state is always visible as variables on the stack rather than hidden away in state machines; every step has a clearly defined type that serves as executable documentation; and concurrent behaviors can be modeled by holding multiple tokens at one time.
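A sketch of the token style in code. Python can't enforce linearity statically, so this hypothetical example checks the use-exactly-once discipline at runtime; in a linearly typed language the compiler would reject the reuse instead. The states and action names are invented for illustration:

```python
class Token:
    """A protocol-state token; linear discipline checked at runtime."""
    def __init__(self, state):
        self.state, self._used = state, False
    def _consume(self, expected):
        assert self.state == expected, f"action not allowed in {self.state}"
        assert not self._used, "linear token used twice"
        self._used = True

def send_request(tok, payload):
    """Allowed only while idle; consumes tok, returns the next-state token."""
    tok._consume("idle")
    # ... actually transmit payload here ...
    return Token("awaiting_reply")

def receive_reply(tok):
    """Allowed only while awaiting a reply; returns to idle."""
    tok._consume("awaiting_reply")
    return "some reply", Token("idle")
```

The protocol position is exactly the token currently held on the stack, and calling send_request twice on the same token fails, which is the "used at most once" half of linearity.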

The cost is that you typically need a lot more objects than a hidden state machine would require, which may cause a performance hit in systems not designed for fast allocation. And there are a lot more types, which isn't convenient to express in some languages.

still trying to get a handle on the value for me in linear types

At first linear types for typing protocols sounded really interesting. But so far I don't get it yet, after reading a variety of things that emphasize single access and memory safety more than message coordination. When I search specifically for linear types with messaging or protocols, the hits I find have way too much math in them (meaning I can never explain it to an average dev). It almost gives me a hypothesis that folks starting from a deeply typed functional language see linear types as a way to force protocol correctness when it would otherwise be hard to express order relationships.

But I really appreciate the reference because trying to force a new perspective to fit an objective helps me think creatively. (Failing to make progress causes phantom false starts in random lateral directions, which I find valuable.)

An efficient messaging runtime angle I have covered with immutable messages and copy-on-write, so the linear type benefit of memory safety there offers me no advantage as a management tactic for resources. I'm interested in value in expressing relationships as meta-info, when disparate languages will be used to generate cooperating processes that operate under different runtime regimes.

Session types

One recent-ish application of linear types to concurrency is a foundational reformulation of the previous work on "Session Types" as a Curry-Howard counterpart of linear logic. You will find more relevant references by looking for "session types" than for "linear types" by themselves, because this interpretation is less straightforward than the usual "applies to mutable memory as uniquely-owned data" one.

In particular, you could have a look at the slides of following tutorial as an introduction to this approach: From Linear Logic to Session-Typed Concurrent Programming, Frank Pfenning, joint work with Luís Caires, Bernardo Toninho, and Dennis Griffith, 2015.

protocol description language

I find good stuff when I search for "session types" and "protocol description language" (and later I will look at scribble).

appealing paper on session types

Sampling readings on session types, I ran across Sessions, from types to programming languages (Vasco T. Vasconcelos, last mod Jan 2011). Abstract:

We discuss session types independently of any programming language. We then embody the notion in languages from three different paradigms: the pi calculus, a functional language, and an object-oriented language.

The writing style strikes me as natural, direct, articulate, and to the point, with a focus on common sense application and similarity of purpose in different contexts. Basically, anything like a conversation is good enough context to apply, from protocols and messaging to object interfaces. There is some math, but the prose engages me more. Since I care little about syntax, I would draft things in Lisp notation first.

In the vast majority of applications, sessions types are associated to message passing, describing the messages flowing on communication channels, regardless of the host programming language (pi calculus, functional, or object-oriented). But this need not be the case. Session types have been used to describe the behavior of objects in component models [13], as well as attached to class definitions where they specify possible sequences of method calls [5].

I also like section two of Type checking a multithreaded functional language with session types.

Wadler's papers

Wadler has been working on session types as well, so you might take a look at his papers.

I haven't looked at them first-hand (in fact, what I've read was from Vasconcelos), but Wadler's an excellent writer, so heuristically I'd consider starting to read about topic X from him, for all X he worked on. In fact, I think he writes so well that his summaries of existing work (in standalone papers, or as part of other papers) are already a huge contribution to the world — and that's not to diminish his more technical contributions.

thanks for the recommendation

I'll keep my eyes peeled. So far Wadler papers I see present a calculus, but I don't find math interesting per se, and skimming doesn't grab me with prose that connects.

I find use of types interesting, but not algebra or calculus of types. If protocol systems are territory and type systems are map, then I'm interested in the territory and the relationship of map and territory, but the map itself is boring to me.

Patterns in territory pique my interest, and comparing their similarities, sometimes via comparing summaries in another notation. But notation itself doesn't interest me much beyond efficiency, flexibility, conciseness, clarity, etc.: practical effects of good notation. I like prose explaining why this map was good for that territory, and how you can keep re-using the map because it's general. But defending best practices in the accounting of internal map management detail is boring (to me), partly because it's way past the point of diminishing returns in reward for effort involved.

Some developers are obsessed with territory and hardly notice map, while other folks are obsessed with map and hardly notice marginal value in territory application. I wrote this whole thing to emphasize the map vs territory metaphor. Wins in cost/benefit caused by goodness in maps, and using maps in the first place, is what I enjoy. But I don't care about color, typography, and artistry in the map for its own sake.

Session types, continued

In addition to the link provided by gasche above, I think you should also have a deep look at Cédric Fournet's papers. I think they attack the questions you are interested in from a more practical angle.

linear logic for stories

The fundamental nature of linear typing is to control usage counts - e.g. require that a promise is fulfilled or a callback is performed or that a file-handle is closed, enforce minimal and maximal fan-ins or fan-outs when modeling hardware, model the various '?' and '+' and '*' patterns in regular expressions, require every point in a plan or plot is executed, etc.

Linear types are usually used for resource management or limit aliasing, so I understand the connotation. But that's just one use of many. Use of linear types only for aliasing and resource management is analogous to learning arithmetic and using it only for accounting.

An interesting perspective is use of linear logic for storytelling developed by Chris Martens at CMU. Linear logic is richer and less precisely manageable than linear types, but a lot of the ideas are applicable. You can tell sophisticated stories with linear types.

Linear logic != linear types???

Linear logic is richer and less precisely manageable than linear types

Aren't those the same, i.e. Curry-Howard analogues? Linear logic/types provides a set of connectives you can pick from, so maybe you've seen richer and weaker variants, but all connectives also make sense for both logic and types.
A good reference (that I've received here on LtU) is Arnaud Spiwack's presentation of System L.

logic and types

While there are relationships between logics and type systems, I don't conflate them. For example, programming a logic language is typically a very different experience than programming a typed language, or even programming in the type system. cf. Isomorphism is not enough.

I like Functional Descriptions of Mealy Machines

A Mealy machine takes some input, processes according to its current state, and each step produces output.

You can simply describe the type as:

type mealy = \i o => i -> (o, mealy i o)

Stating: if you apply this function to input then it'll output a response and a new Mealy machine. With currying, you can carry a state along in the function. If you want to prove two machines equivalent, it suffices to prove a bisimulation relation on that hidden state. Probably, with predicates on the input and output types you can prove other stuff correct, but I didn't think that through.

It isn't very difficult.
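As a runnable illustration of that type (a hypothetical example; the hidden state is carried by a closure rather than by currying):

```python
# A Mealy machine, following the type above: a function from an input
# to (output, next machine).  The running total is the hidden state.
def counter(total=0):
    def step(i):
        new_total = total + i
        return new_total, counter(new_total)
    return step

m = counter()
out, m = m(5)    # out == 5
out, m = m(3)    # out == 8
```

Proving two such machines equivalent amounts to exhibiting a bisimulation on their hidden states, as noted above: two counters are equivalent whenever their totals agree.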

I can use Mealy machines in introductions to other systems

Cool, the Mealy machine reference is really useful to me, as a help in explanations that get progressively more complex. I like Wikipedia's formal description in old-school discrete math tuple style. (Discrete math was my favorite math class in college: both entertaining and useful.) It's only slightly more complex than an FSM with just an input tape, and demonstrates tracking more things during state transitions.

(I get a slight hardware focus from a presentation of Mealy machines, in the sense the output tape is the main observable result. In software, a lot more state can be around, especially when you have essentially an actor.)

I'm used to working with an FSM abstraction where instead of an output per transition, there is instead an action where the machine does something, which might include sending a message (maps well to the idea of output). Except there's no good way to diagram an action in general, since it can be an arbitrary stateful calculation. Output is a subset of action options.

Common useful practice also includes ignoring some of the input alphabet in some states, as if masked, so a formal model would best include a mask per state that says which input values cannot be received in that state (so they need to be dropped or buffered). A typical example is parking on a lock, after which nothing happens until success or failure of locking, which are the only two events understood then. (For example, async scatter-gather typically incurs a join later to impose order.)
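One way to sketch that masking idea (a hypothetical encoding: masked inputs are buffered and replayed after each transition, where a real design might also allow per-state dropping):

```python
from collections import deque

class MaskedFSM:
    """FSM where each state understands only some inputs; others are
    buffered and replayed once a transition makes them understood."""
    def __init__(self, table, state):
        self.table, self.state = table, state   # table[state] = {input: next}
        self.pending = deque()
    def feed(self, event):
        self.pending.append(event)
        self._drain()
    def _drain(self):
        progressed = True
        while progressed:
            progressed = False
            for _ in range(len(self.pending)):
                ev = self.pending.popleft()
                nxt = self.table[self.state].get(ev)
                if nxt is None:
                    self.pending.append(ev)   # masked here: stay buffered
                else:
                    self.state = nxt
                    progressed = True

# Parking on a lock: while "locking", only lock_ok / lock_fail are
# understood; "data" events are buffered until back in "ready".
TABLE = {
    "ready":   {"lock": "locking", "data": "ready"},
    "locking": {"lock_ok": "ready", "lock_fail": "ready"},
}
```

Feeding "lock" then "data" leaves the machine parked in "locking" with "data" buffered; "lock_ok" unparks it and the buffered "data" is then consumed.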

I guess it would be useful for definitions to have public and private parts, to distinguish states that can be observed from those that are entirely internal. The public parts correspond to messages you can send and receive. The private parts involve transitions over state you cannot see, but which nevertheless characterize what a machine is doing, for someone studying how it works.

I usually write docs that I wish could be machine-processed for typing purposes, but it's merely human-readable explanation in the form of ascii graphs and tables. Really small diagrams can summarize a lot of complex code, but if it happens to be wrong, the build system doesn't notice. This isn't a big problem in-the-small. But when you connect lots of pieces in a per-connection stack of components, where some are assembled dynamically from descriptions and negotiation, it's inherently unstable from lack of type analysis when uni-typed octet datagrams are the transport.

Well. You can lift this to IOA

If you start fiddling with the definitions you can lift Mealy machines to Buechi automata, and then to I/O automata (IOA), which do what you want. It's just a matter of how you define the observable traces you get out of the machine by pushing a button. If you create a specialized type for IOA you can simply filter what comes out of the traces.

At least, I think you can do it in that manner. Start off with Mealy machines and specialize towards certain traces and interactions. Not sure about parallel composition, though.

But I always found the IOA formalism somewhat on the heavy side.