Computation Orchestration: A Basis for Wide-Area Computing

Computation Orchestration: A Basis for Wide-Area Computing, Jayadev Misra and William Cook. JSSM 2006.

The widespread deployment of networked applications and the adoption of the internet have fostered an environment in which many distributed services are available. There is great demand to automate business processes and workflows among organizations and individuals. Solutions to such problems require orchestration of concurrent and distributed services in the face of arbitrary delays and failures of components and communication.

We propose a novel approach, called Orc for orchestration, that supports a structured model of concurrent and distributed programming. This model assumes that basic services, like sequential computation and data manipulation, are implemented by primitive sites. Orc provides constructs to orchestrate the concurrent invocation of sites to achieve a goal -- while managing time-outs, priorities, and failure of sites or communication.

The idea of orchestration languages is one of the good ideas to come out of the web services world. Orc is an elegant little process-calculus-inspired programming language that illustrates and embodies the key ideas, and I'd recommend studying it to anyone who has even a passing inclination to design languages or libraries for "mashup" style programming.

Orc was previously

Orc was previously mentioned here on LTU, but it didn't generate much discussion last time around. Since Orc can express sequencing, I also mentioned using Orc to tame effects in a recent thread. Nobody commented on it there, but I'm not sure whether it was because nobody knew enough about Orc to comment, or because it was a completely silly question. :-)

I, for one, haven't explored

I, for one, haven't explored Orc, and glancing at the paper I didn't see why I should get excited. Maybe you can elaborate?

Aside from its simplicity, I

Aside from its simplicity, I find two features of Orc compelling:

  1. It's the only "process calculus" I know of with a termination construct (the asymmetric "where"). Evaluation in other process calculi proceeds to completion; when dealing with limited resources on a real machine, early termination is essential (consider also misbehaving programs). This provides the demonic non-determinism used to prune computation paths (see the example after this list).
  2. It supports angelic non-determinism by default: all computation paths are explored unless the computation is explicitly pruned by the termination construct. This provides a form of totality in concurrent execution.
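
For concreteness, here is roughly the canonical example from the Misra-Cook paper (I'm quoting its notation from memory, so the details may be off): CNN(d) and BBC(d) are called in parallel, the first value published is bound to m, the slower computation is terminated, and the result is emailed:

  Email(a, m) where m :∈ (CNN(d) | BBC(d))

The "|" explores both paths (angelic), while "where" prunes whichever one loses the race (demonic).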

Coupling?

It's the only "process calculus" I know of with a termination construct (asymmetric where).

It may be useful to differentiate between orchestration and process calculi.

While "where" is convenient for simple cases (like orchestration of ticket reservation sites), I suspect coupling of pruning and synchronization, as exemplified by "where", can backfire in more complex situations (speculative computations in general).

Also, general speculative computation can benefit from explicit scheduling, to express constructs richer than plain "execute in parallel". For example, consider biased parallel: schedule A only when B is blocked. Multiply this by the different choices of when the combined computation terminates (when A terminates, when B terminates, when either terminates, when both terminate), and you have too many combinators to make all of them primitive. Engines are one possibility for enabling "user-land" schedulers.

On the other hand, it is possible that orchestration really does require the full power of speculative computation; in that case, Orc does not look adequate.

Orc purpose

I think the most useful view of Orc is not as a programming language experiment. While there are plenty of interesting PL things to be done in its context, they are mostly in the implementation and not in the language features per se. Rather, Orc is most interesting as a process calculus that you can actually program in. Well, at least you will be able to, once the aforementioned implementation gets rounded out by the unmentioned fellow graduate student. There is an alpha of sorts available for the adventurous.

As mentioned, there are some nifty ideas in Orc. Its combination of operators (e.g. the dualistic termination concept) is interesting. Its operational semantics is an example of expressive elegance, describing powerful concepts in a few simple rules. (I think they could be stated a lot more clearly, but I need to put up or shut up on that issue.) Given the deceptive simplicity of the semantics, there are interesting things to be done with the language using non-typing formal methods, but that seems a bit outside the bailiwick of the standard LtU discussion.

So, if you've ever had the hankering to do some serious programming in, say, the Join Calculus, I'd encourage looking into the project. There are also connections to action systems in the Dijkstra tradition, up through Misra's UNITY, which enable reasoning about concurrent systems. This gives connections, for example, to formal methods that don't involve typing.

[disclaimer: I've been working on Orc in various ways for about a year now. While that gives my statements the benefit of familiarity, certain biases (both positive and negative) have certainly crept in.]

So, if you've ever had the

So, if you've ever had the hankering to do some serious programming in, say, the Join Calculus, I'd encourage looking into the project.

How would you rate Orc against the Join Calculus? I've read an encoding of Orc in Join, but not vice versa. Join also provides distribution, whereas I imagine Orc must depend on a site to provide distribution.

The simple answer is that

The simple answer is that Orc is implicitly distributed. One of the open issues is how exactly to integrate an understanding of "local" sites (that are not distributed). I've read that paper, but I never did much with Join. I cited it as a more popular process calculus, as some of its concepts are making their way into C# (and Java, if JoinJava ever gets off the ground). My own work was with the Ambient Calculus. It had better distribution control, but modeling the "where" termination required some hijinks.

Had I the time and expertise, I would have worked this out myself in more detail, but comparisons with process calculi are another nebulous area. The current theory is that Orc is more expressive than the Pi calculus, but not more so than fancier calculi like Join or Ambients. This is a "loose" concept, though, given that comparing certain calculus-specific concepts can be rough.

Again, the idea is that the particular setup of Orc allows full (and interesting) programmability without losing all the nifty process calculus analyzability. But I digress to the answer to Ehud's question...

Feel free to prod more if that didn't cover your question.

I'm only just starting on

I'm only just starting on process calculi. Join calculus looked promising, and I'm very familiar with Orc. I've skimmed some stuff on the Ambient Calculus, but I haven't gotten into it much. If you have any good introductory material, I'd appreciate a link. From what I've seen, there are many variations on Ambients, and I came across some interesting papers on adding resource control to the Ambient Calculus. I saved them for later perusal, as that's of particular interest. I think termination is also very important; what sorts of hijinks are required to support termination in Ambients and Join?

As for local sites in Orc, that can be information tracked in a type system, though I imagine that's more machinery than they're looking to add.

Orc expressiveness

The current theory is that Orc is more expressive than the Pi Calculus, but not more so than the fancier calcs like Join or Ambients.

Can you explain more here? The pi calculus is Turing-complete, and I haven't seen a Turing-completeness proof for Orc anywhere. Can you post a link if you are aware of such a proof?
Also, if I'm not mistaken, Fournet and Gonthier have proved that the Join calculus and the Pi calculus are equivalent.

Orc separates computation

Orc separates computation from orchestration, so I'm not sure that core-Orc is Turing complete (though it might be since it permits arbitrary recursion). I don't think it's hard to believe that it's Turing complete once you introduce some of the standard sites, like the integers.
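
For instance (quoting the paper's notation from memory, so the details may be off), recursion plus the timer site Rtimer already yields unbounded behavior; the paper's Metronome publishes a signal every time unit forever:

  Metronome ≜ Signal | (Rtimer(1) >> Metronome)

Layer integer sites and arithmetic on top of that kind of recursion, and Turing completeness becomes quite plausible.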

Orc expressiveness

Perhaps I'm using it incorrectly, but I separate "expressiveness" from power in terms of theoretical equivalences. Sure, BrainFuck is Turing-equivalent, too, but that's practically meaningless in language comparisons. Though I can't provide a reference (the Pi-calculus to Orc comparison is unpublished and not my work), there is a substantially powerful termination concept in Orc that is hard to model in the Pi calculus. Specifically, when the "where" op terminates part of an expression, it drops it from the statement entirely. The pragmatics of that are left up in the air, but there is generally no such facility in the other calculi. I was able to cheat a bit in Ambients by moving that portion of the expression into a pseudo /dev/null box, "deleting" it by removing its ability to affect the environment.

So yes, there is some sense in which many of these calculi are "equivalent", but there comes a point at which a feature of one is only possible in another by recreating an interpreter for one inside the other. E.g., that termination concept does not seem to have any kind of fundamental solution, hacky or not, in the Pi-calculus.

In the end, naasking is right. Orc is just a small transition system without adding some kind of semantics to its sites, such as queues or conditionals. I think the comparison to the Pi calculus required adding channels, but I can't find the paper right now.

An equivalent operation

Sure, BrainFuck is Turing-equivalent, too, but that's practically meaningless in language comparisons.

Perhaps more important in this case is the fact that it's not really clear that "Turing Equivalence" is all that useful a concept when discussing process calculi: the interaction, concurrency, and resultant non-determinism of process calculi take them outside the realm of pure Turing Machine computations. Indeed, that was the whole point of developing the process calculi.

Specifically, when the "where" op terminates part of an expression, it drops it from the statement entirely. The pragmatics of that are left up in the air, but there is generally no such facility in the other calculi.

I believe that the interrupt operator in CSP provides a somewhat similar capability to terminate an expression before it has completed. The semantics aren't quite the same as Orc's "where", but I'm pretty sure that you could produce a "where"-like effect using interrupt.
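
If I recall the trace semantics from Hoare's book correctly (so treat this as a sketch), the similarity is visible directly:

  traces(P △ Q) = { s ^ t | s ∈ traces(P), t ∈ traces(Q) }

where ^ is trace concatenation. That is, P's behaviour can be abandoned at any point in favour of Q, so a Q that leads immediately to termination can "kill" the remainder of P, much as "where" kills the pruned expression.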

ditto

Well put-- definitely where I was trying to head, but you hit it spot on. I figured there was something simpler than my long-winded answer, so it's nice to have the LtU community to fall back on.

And thanks for pointing out the CSP link-- I'm really not familiar with it at all. I tend to say "process calculi" when I really mean "the PCs I've actually looked at". It is probably best that I avoid painting them all with broad strokes, as their designs and abilities range quite widely.

Orc embedding for C#

I've just implemented an embedding of Orc in C# à la Finally Tagless, Staged Interpreters. It uses a continuation-passing style reminiscent of strongly typed printf. Because all computation is driven from within Orc, I also had to add a Halt operation to terminate a branch. The only locking happens in the event queue, and even this could largely be removed by maintaining per-thread event queues and load balancing à la the nested schedulers in Manticore. Only proper termination for 'where' is missing from the implementation; I'll be adding that soon.
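
To give a flavor of the style, here is a toy single-threaded sketch of the CPS idea. This is not the actual Interp.cs API -- the names Orc, Site, Par, and Seq are made up for illustration -- and it has no event queue, no Halt, and no 'where':

using System;

// Toy sketch: an Orc expression in CPS takes a callback that is
// invoked once for every value the expression publishes.
delegate void Orc<T>(Action<T> publish);

static class OrcSketch
{
    // A site call that publishes a single value.
    public static Orc<T> Site<T>(Func<T> call)
    {
        return publish => publish(call());
    }

    // f | g : symmetric parallel; publications of both branches interleave.
    public static Orc<T> Par<T>(Orc<T> f, Orc<T> g)
    {
        return publish => { f(publish); g(publish); };
    }

    // f >x> g : each value published by f starts a fresh instance of g.
    public static Orc<U> Seq<T, U>(Orc<T> f, Func<T, Orc<U>> g)
    {
        return publish => f(x => g(x)(publish));
    }
}

In the real implementation, the branch activations above are queued as events rather than run inline, which is where the locking mentioned earlier comes in.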

The implementation is surprisingly short (Interp.cs), but its use is somewhat more verbose in C# than it would be in a functional language. This is mainly due to the fact that C#/.NET has a non-uniform type system, where System.Void cannot be used as a generic type parameter, forcing the user to generate a number of wrapper delegates. The very limited type inference is also an impediment, but this should be alleviated in C# 3.0, and lambdas should help significantly as well.

Translating this into a useful OCaml/SML library should be trivial, particularly since they support custom infix operators. Combinators are very natural in functional languages, but a little awkward in object-oriented ones. The limitations of .NET's type constraints forced me into mimicking modules, yielding an "ExpressionBuilder" pattern. Indeed, such awkwardness has led to the use of "builder patterns" before, and I find it amusing that object-oriented languages are forced to embrace functional structures, the end result often being the worst of both worlds.

In any case, I find it interesting that Orc's internal machinery is very close to E's event loops. The lazy evaluation of Orc expressions is equivalent to promises in E, and this is apparent in my C# embedding, where calls and the "where" construct all pipe promises to subsequent stages. This corresponds to E's "send" semantics, whereas immediate/local calls that return non-promise values correspond to E's "call" semantics. The deterministic semantics of Orc is achieved in the same way as E's: internal events/computation are handled first, and all external events are handled in a subsequent round (rounds are called "turns" in E). This distinction is not represented in my embedding at the moment, but it is the only other limitation I'm aware of.

On a historical note, Orc.NET is a project I started over a year ago soon after I joined LTU. Two years ago I knew practically nothing about languages, embeddings, process calculi, etc., and Orc.NET was on hiatus until I figured out how to do it. I think this marks my first public mini-language implementation (though I have a few private ones). Thanks to all LTU members for the helpful comments and insightful discussion over the past two years! :-)

As an orchestration/process calculus implementation, I think Orc.NET is equivalent in power to the Java ref_send library, which implements E's event loop API, though Orc.NET is less safe due to the lack of taming. It will be interesting to see if I can match its usability.

Embedding multiple interpreters in C#?

Hi! I'm quite interested in writing tagless interpreters in programming languages as widely used as C# and Java. Did you manage to somehow encode Orc expressions in C# so that the same expression can be interpreted in multiple ways? I am especially curious because C# and Java don't have generics over generics -- for example, an "interface Monad<M<T>>" can't be expressed. I looked at your code and couldn't find any polymorphism over interpreters.

I had started my initial

I had started my initial implementation using an IOrc interface, but ran into typing issues unrelated to the one you point out; I would have eventually run into your issue too. I have been considering how to further abstract the interpreter in order to implement a compiler, though. I've described the encoding I used in a blog post. The encoding fixes the interpreter at compile time, so it's monomorphic rather than polymorphic, though rebinding to a different interpreter is quite simple. Making the interpreter polymorphic using this encoding does seem to pose quite a challenge. I'll report here if I come up with something.

Tagless, mostly type-safe interpreters in C#

It's possible to define a tagless Symantics that is polymorphic over interpreters and is type-safe for clients. I describe the somewhat elaborate technique here. We have to exploit a dynamic type, and an interpreter implementor has to be a bit careful with casts, but the interpreter user does not. It is usually quite clear what types are involved by visual inspection, so I don't think the casting is too onerous. Here's a sample to give you an idea, based on the Finally Tagless paper:

// The language being embedded polymorphic over interpreter I.
public static class Symantics<I>
    where I : struct, Interpreter<I>
{
    // A typed value that encapsulates a dynamic type. This compensates
    // for the lack of type constructor polymorphism.
    public sealed class V<T>
    {
        internal object state;
        internal V(T state)
        {
            this.state = state;
        }
    }
    public static V<bool> Bool(bool b)
    {
        return default(I).Bool(b);
    }
    public static V<int> Int(int i)
    {
        return default(I).Int(i);
    }
    public static V<bool> Leq(V<int> i1, V<int> i2)
    {
        return default(I).Leq(i1.state, i2.state);
    }
    public static V<Func<T, R>> Lam<T, R>(Func<V<T>, V<R>> f)
    {
        return default(I).Lam<T, R>(f);
    }
    public static V<R> App<T, R>(V<Func<T, R>> f, V<T> a)
    {
        return default(I).App<T, R>(f.state, a.state);
    }
    public static V<T> If<T>(V<bool> cond, Func<V<T>> then, Func<V<T>> _else)
    {
        return default(I).If<T>(cond.state, then, _else);
    }
}

// The interface describing the implementation of the above module.
public interface Interpreter<I>
    where I : struct, Interpreter<I>
{
    Symantics<I>.V<bool> Bool(bool b);
    Symantics<I>.V<int> Int(int i);
    Symantics<I>.V<bool> Leq(object i1, object i2);
    Symantics<I>.V<Func<T,R>> Lam<T,R>(Func<Symantics<I>.V<T>, Symantics<I>.V<R>> f);
    Symantics<I>.V<T> If<T>(object cond, Func<Symantics<I>.V<T>> then, Func<Symantics<I>.V<T>> _else);
    Symantics<I>.V<U> App<T, U>(object f, object a);
}

// This is a concrete implementation of the module interface. New interpreters
// need only implement Interpreter<I>. Only the interpreter knows the internal
// structure of the values being manipulated, as Symantics<I>.V<T> encapsulates an object.
public struct R : Interpreter<R>
{
    public Symantics<R>.V<bool> Bool(bool b)
    {
        return new Symantics<R>.V<bool>(b);
    }
    public Symantics<R>.V<int> Int(int i)
    {
        return new Symantics<R>.V<int>(i);
    }
    public Symantics<R>.V<bool> Leq(object i1, object i2)
    {
        return new Symantics<R>.V<bool>((int)i1 <= (int)i2);
    }
    public Symantics<R>.V<T> If<T>(object cond, Func<Symantics<R>.V<T>> then, Func<Symantics<R>.V<T>> _else)
    {
        return (bool)cond ? then() : _else();
    }
    public Symantics<R>.V<Func<T, U>> Lam<T, U>(Func<Symantics<R>.V<T>, Symantics<R>.V<U>> e)
    {
        Func<T,U> f = (x => (U)e(new Symantics<R>.V<T>(x)).state);
        return new Symantics<R>.V<Func<T,U>>(f);
    }
    public Symantics<R>.V<U> App<T, U>(object f, object a)
    {
        return new Symantics<R>.V<U>((f as Func<T, U>)((T)a));
    }
}

// construct some programs abstracted over interpreter I
static void mkProg<I>()
    where I : struct, Interpreter<I>
{
    var i = Symantics<I>.Int(1);                        // let i = 1
    var f = Symantics<I>.Lam<int,bool>(
        x => Symantics<I>.Leq(Symantics<I>.Int(0), x)); // let f x = 0 <= x
    var a = Symantics<I>.App(f, i);                     // let a = (f i), ie. returns true
    var k = Symantics<I>.If<int>(a,                     // let k = if a then -1 else i, ie. k = -1
        () => Symantics<I>.Int(-1),
        () => i);
}

static void Main(string[] args)
{
    mkProg<R>();
}

This embedding would be simpler if interfaces permitted static members to be defined as in Java, since Interpreter is just a set of static methods. I simulate this by forcing Interpreter implementations to be struct types, and then using the default() operator to construct one inline.
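
For example (Two is a hypothetical client function, just for illustration), any code that is generic over the interpreter can conjure an instance on demand:

static Symantics<I>.V<int> Two<I>() where I : struct, Interpreter<I>
{
    // default(I) yields a zero-initialized struct, i.e. a usable
    // stateless interpreter instance, with no allocation.
    return default(I).Int(2);
}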

It should be clear that client programs are type-safe, but that Interpreter implementations have to be a little careful. Also, Lam accesses the internal state of Symantics<I>.V, which means Interpreter implementations must reside in the same assembly (the fields are internal). There's probably some better way to encode it, perhaps via subclassing, but this suffices as a proof of concept.

An even simpler technique is

A less safe, though much simpler technique is possible by branding the Symantics interface:

public abstract class Symantics<B>
{
    public abstract Exp<int,B> Int(int i);
    public abstract Exp<bool,B> Bool(bool b);
    public abstract Exp<Func<T, R>, B> Lam<T, R>(Func<Exp<T, B>, Exp<R, B>> body);
    public abstract Exp<R,B> App<T,R>(Exp<Func<T,R>,B> f, Exp<T,B> a);
    public abstract Exp<bool, B> Leq(Exp<int, B> i1, Exp<int, B> i2);
    public abstract Exp<T, B> If<T>(Exp<bool, B> cond, Func<Exp<T,B>> _then, Func<Exp<T,B>> _else);
}

public abstract class Exp<T, B>
{
}

sealed class Interp : Symantics<Interp>
{
    sealed class exp<T> : Exp<T,Interp>
    {
        internal T value;
        public exp(T value)
        {
            this.value = value;
        }
    }
    T V<T>(Exp<T, Interp> e)
    {
        return (e as exp<T>).value;
    }
    public override Exp<int, Interp> Int(int i)
    {
        return new exp<int>(i);
    }
    public override Exp<bool, Interp> Bool(bool b)
    {
        return new exp<bool>(b);
    }
    public override Exp<bool, Interp> Leq(Exp<int, Interp> i1, Exp<int, Interp> i2)
    {
        return new exp<bool>(V(i1) <= V(i2));
    }
    public override Exp<Func<T, R>, Interp> Lam<T, R>(Func<Exp<T, Interp>, Exp<R, Interp>> body)
    {
        return new exp<Func<T, R>>(t => V(body(new exp<T>(t))));
    }
    public override Exp<R, Interp> App<T, R>(Exp<Func<T, R>, Interp> f, Exp<T, Interp> a)
    {
        return new exp<R>(V(f)(V(a)));
    }
    public override Exp<T, Interp> If<T>(Exp<bool, Interp> cond, Func<Exp<T, Interp>> _then, Func<Exp<T, Interp>> _else)
    {
        return V(cond) ? _then() : _else();
    }
}

// construct some programs abstracted over interpreter I
static void mkProg<I>(Symantics<I> s)
{
    var i = s.Int(1);               // let i = 1
    var f = s.Lam<int, bool>(
        x => s.Leq(s.Int(0), x));   // let f x = 0 <= x
    var a = s.App(f, i);            // let a = (f i), ie. returns true
    var k = s.If(a,                 // let k = if a then -1 else i, ie. k = -1
        () => s.Int(-1),
        () => i);
}

static void Main(string[] args)
{
    mkProg(new Interp());
    Console.ReadLine();
}

Any client unsafety is a result of subclassing the Exp type, which is unlikely, since all real implementations would keep their Exp subclasses private, and mixing Exp types from different interpreters wouldn't typecheck. The client would have to actively try to violate the contract, in which case the resulting runtime exceptions are perfectly acceptable IMO.
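
To illustrate (BoundsCheck is a made-up helper), the brand B threads through every expression type, so a generic client can only combine expressions produced by the same interpreter:

static Exp<bool, B> BoundsCheck<B>(Symantics<B> s, Exp<int, B> x)
{
    // Everything here shares the brand B, so it typechecks; passing
    // an Exp<int, SomeOtherInterp> as x would be a compile-time error.
    return s.Leq(s.Int(0), x);
}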

This final tagless encoding

This final encoding has a more object-oriented, Smalltalk-like structure, instead of the more functional structure of the previous post:

// constructors for the core language types, branded by the interpreter
public abstract class Ctor<B>
{
    public abstract Int<B> Int(int i);
    public abstract Bool<B> Bool(bool b);
    public abstract Lam<T, R, B> Lam<T, R>(Func<T, R> body)
        where T : Exp<B>
        where R : Exp<B>;
}

public abstract class Exp<B>
{
}
public abstract class Bool<B> : Exp<B>
{
    public abstract E If<E>(Func<E> _then, Func<E> _else)
        where E : Exp<B>;
}
public abstract class Int<B> : Exp<B>
{
    public abstract Bool<B> Leq(Int<B> i);
    public abstract Bool<B> Geq(Int<B> i);

    public static Bool<B> operator <=(Int<B> i1, Int<B> i2)
    {
        return i1.Leq(i2);
    }
    public static Bool<B> operator >=(Int<B> i1, Int<B> i2)
    {
        return i1.Geq(i2);
    }
}
public abstract class Lam<T, R, B> : Exp<B>
    where T : Exp<B>
    where R : Exp<B>
{
    public abstract R Apply(T t);
}

// *** Begin branded interpreter definitions ***

sealed class _bool : Bool<Interp>
{
    bool b;
    internal _bool(bool b)
    {
        this.b = b;
    }
    public override E If<E>(Func<E> _then, Func<E> _else)
    {
        return b ? _then() : _else();
    }
}
sealed class _int : Int<Interp>
{
    int i;
    internal _int(int i)
    {
        this.i = i;
    }

    public override Bool<Interp> Leq(Int<Interp> i)
    {
        return new _bool(this.i <= (i as _int).i);
    }
    public override Bool<Interp> Geq(Int<Interp> i)
    {
        return new _bool(this.i >= (i as _int).i);
    }
}
sealed class _lam<T, R> : Lam<T, R, Interp>
    where T : Exp<Interp>
    where R : Exp<Interp>
{
    Func<T, R> f;
    public _lam(Func<T, R> f)
    {
        this.f = f;
    }
    public override R Apply(T t)
    {
        return f(t);
    }
}

public sealed class Interp : Ctor<Interp>
{
    public override Int<Interp> Int(int i)
    {
        return new _int(i);
    }
    public override Bool<Interp> Bool(bool b)
    {
        return new _bool(b);
    }
    public override Lam<T, R, Interp> Lam<T, R>(Func<T, R> body)
    {
        return new _lam<T,R>(body);
    }
}

// construct some programs abstracted over brand I
static void mkProg<I>(Ctor<I> s)
{
    var i = s.Int(1);               // let i = 1
    var f = s.Lam<Int<I>, Bool<I>>(
        x => s.Int(0) <= x);        // let f x = 0 <= x (note that this uses an ordinary C# operator)
    var a = f.Apply(i);             // let a = (f i), ie. returns true
    var k = a.If(                   // let k = if a then -1 else i, ie. k = -1
        () => s.Int(-1),
        () => i);
}

static void Main(string[] args)
{
    mkProg(new Interp());
    Console.ReadLine();
}

Each type is implemented by a class, none of which are publicly accessible. Operations on those types are delegated to instances of that class.

Like the last encoding, the types for a given interpretation method are branded. This makes a program type-safe despite the casts, unless the client actually implements their own expression classes for one of the core types and tries to inject them into a program. In that case, runtime exceptions are perfectly acceptable IMO.

This is a slightly more natural C# embedding, since you can use C# operators, method calls, etc. I came up with this structure while prototyping a mobile code idea inspired by the "FS Web Toolkit". The idea was that mobile programs could be written in such an embedded language in C#, then transparently executed either locally or remotely depending on the client's capabilities. For web applications, it could execute server-side for thin HTML-only clients (and thus generate the web page on the server), or client-side for rich JavaScript-capable clients. When translating to JavaScript, I think this requires an A-normal form translation; otherwise expressions are duplicated and can become quite large. It's still a work in progress. :-)
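
To sketch the duplication problem (g, x, and c are hypothetical, and the JavaScript is hand-waved): when a client binds an expression to a C# variable and uses it in two places, a naive generator prints the whole tree at each use site:

// In the embedded program, y is shared:
//   var y = s.App(g, x);
//   var r = s.If(c, () => y, () => y);
// A naive JavaScript generator inlines y's tree at every use:
//   c ? g(x) : g(x)          // g(x) emitted (and evaluated) twice
// The A-normal form translation names intermediates first:
//   var y = g(x); c ? y : y  // emitted and evaluated once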

Given the deceptive

Given the deceptive simplicity of the semantics, there are interesting things to be done with the language with non-typing formal methods, but it seems a bit outside the bailiwick of the standard LtU discussion.

Can you elaborate? I am not sure what kind of things you have in mind.

The first things to look at

The first things to look at are in the paper, with the "Kleene" laws. I think it's an illustrative example of the power of a nice, clean semantics. My current project hopes to bring that understanding into the software engineering environment-- putting the semantics right at the fingertips of programmers. That is, directly proving program properties statically, right in an IDE. Hopefully this will allow theorem proving and model checking to be much more highly automated and accessible. The "site" semantics should also enable really clean compositional reasoning.

The other nifty side is that we're finally working with semantics-first languages. That is, with the semantics already defined, any proof using it will be true for any implementation thereof.

Nothing entirely new, just in a different combination than has made the rounds before.

Bisimulation proof technique

The first things to look at are in the paper, with the "Kleene" laws. I think it's an illustrative example of the power of a nice, clean semantics.

Do you mean the operational and the trace semantics of Orc here? In my opinion, it is the bisimulation proof technique that makes the "Kleene" laws easy to prove, rather than the Orc semantics. Without bisimulation, one would probably have to do non-trivial manipulations of the trace sets to prove these laws.

elegance still rules

I don't see how bisimulation is relevant to the comparison of Orc with other calculi and programming languages. They all benefit from the technique as far as their formalizations can use it. It matters not to the end user that the underlying proofs came about with bisimulation or any other technique. Someone programming with a statically typed language does not generally care how the eggheads proved its soundness and completeness. Nor does bisimulation suddenly make gargantuan systems, like C++, trivial to reason about. Size still matters.

My experience with the system already shows bisimulation to be no be-all and end-all of proof techniques. Another version of the semantics is already facing proofs that jump enormously in size and complexity just from trying to introduce a concept of time. This is why I emphasize the simplicity of its semantics-- bisimulation may help everyone make better proofs, but there will always be an advantage to dealing with the smallest definitions possible. It's the fundamental mathematical/logical principle of elegance. The additional problem for PL, unlike theoretical math, is that the system also needs to be usefully programmable, which pushes the lower bound of complexity upwards.

what is the lower bound?

needs to be usefully programmable, pushing the lower bound of complexity upwards.

What is a useful lower bound? You could maybe be super small in the core, and then use macros to let people extend things -- does that count? I'm curious what, pragmatically, people's experience leads them to list as the minimal set of features required for an actually useful language.

An expressive language

Similar thoughts have been discussed several times on LtU. A brief synopsis of my view and links to more involved discussions on LtU is here.

The upshot is that macros don't add expressiveness, so the "small core" would already have to be very expressive. Scheme is an example of such a language. Certainly, though, macros could add a non-trivial amount of convenience on top of that expressive core.

Re: as previously seen on LtU

Thank you for the pointer!

Interesting that PLT then apparently has the best of both worlds. Now, if only I could find the time to learn about their static checker. :-)

Can Orc tame effects? I'd

Can Orc tame effects? I'd say: not directly.

There's been a lot of research done on the semantics, and in the process we've come up with ways to inspect the underlying causal structure of an Orc expression, e.g. using event structures. I could imagine tools that use these structures to track (and tame) side effects, but we haven't developed any such tools yet at UT.

I suspect that we'll soon be addressing this question indirectly in the process of researching transactions in Orc, but it's too early to say for sure.