## Don't use "Yield" for co-routines; instead use "Postpone"

A pattern followed by “↑” and a type can be used to match the pattern with something which has been extended from the type. For example,


PostponedFringe.[aTree:Tree]:FutureList◅String▻  ≡
aTree �
aLeaf↑Leaf ⦂ [aLeaf.getString[ ]] ⍌
aFork↑Fork ⦂ [⩛PostponedFringe.[aFork.getLeft[ ]],  ⩛Postpone PostponedFringe.[aFork.getRight[ ]]]▮


For example, ["The", "boy"]▮ is equivalent to the following:
PostponedFringe.[Fork.[Leaf.["The"], Leaf.["boy"]]]▮

The procedure PostponedFringe can be used to define SameFringe? that determines whether two trees have the same fringe [Hewitt 1972]:

   SameFringe?.[aTree:Tree, anotherTree:Tree]:Boolean ≡  //  test if two trees have the same fringe
PostponedFringe.[aTree] = PostponedFringe.[anotherTree]▮


For example, [False, True]▮ is equivalent to the following:

Let aList ← PostponedFringe.[Fork.[Leaf.["The"], Leaf.["boy"]]]｡    // aList = ["The", "boy"]
[SameFringe?.[aList, ["The", "girl"]],             // False
SameFringe?.[aList, ["The", "boy"]]]▮           // True


Note that Leaf can be defined as an extension of Tree:

Actor Leaf.[aString:String] extension Tree using
getString[ ]:String → aString▮


and Fork can be defined as an extension of Tree:

Actor Fork.[aLeft:Tree, aRight:Tree] extension Tree using
getLeft[ ]:Tree → aLeft,
getRight[ ]:Tree → aRight▮


Edited for clarity.

### Where are the coroutines?

Where are the coroutines in the example you gave?

### re Where are the coroutines

Where are the coroutines ...

Where is the confusion?

The code eagerly recurses down the left branches of a tree, spawning coroutines for right branches along the way.

In (e.g.) Python you could write superficially similar code in the form of a recursive generator that uses yield. The actor version returns a list of definite type. The Python version involves non-referentially transparent generator functions.

### Things I understand and like about coroutines, I don't see here

The code eagerly recurses down the left branches of a tree, spawning coroutines for right branches along the way.

Those coroutines look like lazy thunks to me. You might say these thunks represent continuations of coroutines in progress, but to justify that point of view, what did the computation look like that led up to and yielded these continuations? Maybe there's no answer involving such a "yield" (even an implicit one), but if that's true, are there really coroutines here in any form?

Hewitt asked Keean "What does code for LazyFringe and SameFringe? look like using Yield?" I see this as a very open-ended question, akin to asking "What does this code look like in an imperative style?" It can look like a lot of things, depending on what the postpone example was intended to do and how the yield programmer chooses to factor their code.

Where is the confusion?

The title of the thread is about using postpone instead of yield.

If you asked me out of the blue, "What does yield add to a language?" I'd think about Lua-style coroutines, and I'd say effect handling. That is, it lets us run certain subcomputations with an ambient ability to invoke an additional, custom side effect (called yield), even if the surrounding computation is pure.

If you specified that you were only talking about the limited kind of yield offered in Python or ECMAScript, then I'd say it provides a way to avoid the indentation that arises from continuation-passing style.

I don't see how "postpone" is supposed to accomplish local impurity, and I don't even see CPS indentation reduction. Partly that's because I don't know -- even Hewitt doesn't know -- what the yield version of the example would be. What definition of coroutine do I need in order to see a coroutine in this example, and what is that kind of coroutine good for?

### re Things I understand and like about coroutines, I don't see here

Those coroutines look like lazy thunks to me.

They generalize thunks by abstracting away inessential details about flow control, allowing these "thunks" to be activated concurrently if desired.

Hewitt asked Keean "What does code for LazyFringe and SameFringe? look like using Yield?" I see this is a very open-ended question

Well, Keean produced a much longer definition that introduces an auxiliary stack and a stateful loop and which doesn't define an actual lazy list.

On the maths side, it's inelegant to say we need the concept of both lists and generators, which are basically isomorphic, but if you want to use yield you need generators, and ..... bah. Hewitt's version wins on maths parsimony.

I don't see how "postpone" is supposed to accomplish local impurity,

OK. Solving this (not too hard) problem might help you see it:

Assume a list interface with, for old time's sake, CAR and CDR messages for obtaining the first element and the (possibly null) tail sub-list.

Write a "counter" actor (not a list actor, a stateful actor) that for each GET message returns the next number 0, 1, 2, ..... It counts the number of GET messages it gets.

Finally, give an actor definition for an infinite, lazy list of numbers, no two list elements the same number, each element returned by a single "counter" actor. (Note: the counter actor will de facto serialize access to elements of the infinite lazy list.)
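
A sketch of the intended shape in ordinary Python (all names here — Counter, LazyCell, car, cdr — are invented for illustration; a lock stands in for the actor's one-message-at-a-time serialization, and an unforced cell stands in for a future):

```python
import threading

class Counter:
    """A stateful 'counter' actor: each get() answers with 0, 1, 2, ..."""
    def __init__(self):
        self._n = 0
        self._lock = threading.Lock()   # one message at a time, actor-style
    def get(self):
        with self._lock:
            n = self._n
            self._n += 1
            return n

class LazyCell:
    """An infinite lazy list of numbers; every element comes from one shared
    counter, so no two elements are the same number."""
    def __init__(self, counter):
        self._counter = counter
        self._forced = False
        self._lock = threading.Lock()
    def _force(self):
        with self._lock:
            if not self._forced:
                self._head = self._counter.get()   # serialized by the counter
                self._tail = LazyCell(self._counter)
                self._forced = True
    def car(self):
        self._force()
        return self._head
    def cdr(self):
        self._force()
        return self._tail

nums = LazyCell(Counter())
print(nums.car(), nums.cdr().car(), nums.cdr().cdr().car())  # 0 1 2
```

Note that once a cell is forced, its element is fixed forever; the counter de facto serializes which number each cell receives, even under concurrent access.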

What definition of coroutine do I need in order to see a coroutine in this example, and what is that kind of coroutine good for?

Here is another challenge problem that might help you see it.

Again, assuming our CAR/CDR/NIL style lists...

Two lists are unequal if one is longer than the other or if the two lists differ in any one element.

Your job is to write a procedure actor that tests two lists for equality -- but with some extra requirements.

Assume that concurrency is cheap. E.g., feel free to use an either operator where "a either b" tries to compute a and b concurrently, returning whichever one is done first.

Assume that in some cases, CAR and/or CDR may be very, very expensive.

Write a list equality procedure that exploits concurrency in order to try to quickly prove two lists are unequal, if possible.
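
For concreteness, one way the second challenge might look in ordinary Python (a sketch: lists_unequal_fast is an invented name, plain lists stand in for the CAR/CDR lists, the element comparisons play the part of the expensive operations, and concurrent.futures.wait with FIRST_COMPLETED plays the role of the "either" race):

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def lists_unequal_fast(a, b):
    """Race all element comparisons; return True as soon as any one of them
    proves the lists unequal, without waiting on earlier comparisons."""
    if len(a) != len(b):
        return True                      # one list longer => unequal
    with ThreadPoolExecutor() as pool:
        pending = {pool.submit(lambda x, y: x != y, x, y) for x, y in zip(a, b)}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            if any(f.result() for f in done):
                return True              # first finished difference wins
    return False

print(lists_unequal_fast([1, 2, 3], [1, 9, 3]))  # True
print(lists_unequal_fast([1, 2, 3], [1, 2, 3]))  # False
```

A late element that compares quickly can prove inequality before a slow early element ever finishes.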

### Inversion of control

I don't see how "postpone" is supposed to accomplish local impurity,

OK. Solving this (not too hard) problem might help you see it

I found these thought experiments pretty helpful. :) I can't believe I hadn't thought of either as being a race operator, and now I understand the kind of concurrent equality-checking you see in this example.

That said, what I mean by local impurity is that we actually write impure expressions in our code. To call it a "coroutine," I'd expect the impurity to happen in between the inputs and outputs of the procedures and operators we invoke (procedural style), rather than in between one message handler and another (Actor style) or in between storing a first-class computation and resuscitating it (CPS or monadic style). In other words, I expect some sort of inversion of control.

But my point isn't to tell anyone their definition of coroutine is wrong. Even my own most recent design sketch of "coroutines" doesn't conform to what I'm saying (but not for lack of trying...), so I'm just glad I'm starting to understand where the coroutines are in this example, from your point of view.

Well, Keean produced a much longer definition that introduces an auxiliary stack and a stateful loop [...]

I'm okay with those differences. If we're writing code that does roughly the same thing but in a different language, we'll generally have to accept some verbosity and some explicit reinvention of the computational infrastructure that the new language lacks (e.g. an unlimited call stack).

Just imagine the verbosity we'd get from porting Python code to ActorScript or Haskell if we felt the need to preserve its dynamic typing and its monkey-patchability.

[...] and which doesn't define an actual lazy list.

To me, this difference isn't a bug so much as an interesting point of contrast. There are going to be a few differences like this if the code is really playing to the strengths of yield.

Then again, this difference would be more interesting if it weren't so easy to convert between generators and lazy lists in both languages (Python and ActorScript) thanks to their impurity. (At least, I think ActorScript can do it. You basically outlined the approach in your challenge problems.)

On the maths side, it's inelegant to say we need the concept of both lists and generators, which are basically isomorphic, but if you want to use yield you need generators, and ..... bah. Hewitt's version wins on maths parsimony.

A separation of data and codata seems to exist in both languages: "FutureList is a different type than List; both are built-in."

### re inversion of control

That said, what I mean by local impurity is that we actually write impure expressions in our code.

Am I misunderstanding something? The obvious definition of a "counter" actor does this. (See the bank-balance examples in various papers, for example. Or just consider the basics of the actor model: the bit about "decide how it will handle the next message" is a local state update. The "afterwards" syntax gives you the "in between" part, as well as the stuff before "the hole".)

Just imagine the verbosity we'd get from porting Python code to ActorScript or Haskell if we felt the need to preserve its dynamic typing and its monkey-patchability.

I was failing to give a good hint there. Sorry for the confusion.

The obvious generator solution serializes strict evaluation of the successive elements of the fringe. The actor definition doesn't require this and the possibility of a non-deterministic equality routine makes that clear. The actor definition has wildly more (or at least more plainly obvious) opportunities for operational concurrency than does the yield version.

The point of pointing out "doesn't return a list" was that returning a (kind of) list allows evaluation of later elements in the fringe not to be strict in the evaluation of earlier elements in the fringe.

Then again, this difference would be more interesting if it weren't so easy to convert between generators and lazy lists in both languages (Python and ActorScript) thanks to their impurity.

This is my point. It is *not* so easy in Python to express the concurrency the ActorScript version expresses. (I'm not saying it is impossible.)

### FutureList and Concurrency

I get the concurrency bit, and it is very true that you don't implicitly get this in Python.

The usefulness of this concurrency is marginal though, as the sequence generator needs to evaluate 1 before it can evaluate 2. FutureList makes things look more parallel because it eagerly evaluates the head. You may not want this if the calculation is very slow (let's say it takes a year to yield each value): now you have to wait an extra year just because FutureList wants to eagerly hold the head. You don't really get any parallelism anyway, as you have to wait a year for the first value to be returned before the generator is in the correct state to request the second value. This applies to fringe too, as you do not know what index a fringe element has until you have counted all those to the left.

How do I control the concurrency? Let's say I want a sequential version of FutureList equality (because the node-to-node communication cost is higher than the calculation cost, and my compiler is not smart enough to figure that out).

### Objects; thunks; concurrent evaluation

Am I misunderstanding something? The obvious definition of a "counter" actor does this.

I might be missing something, but the examples I've seen look a lot like object methods. The Actor receives a request, it evaluates an impure expression, and then it takes the return value and sends it as a response. It never sends a response or receives another request partway through the expression as a side effect, so the client is not acting as an effect handler.

The obvious generator solution serializes strict evaluation of the successive elements of the fringe.

If that's a problem, I'd fix it by yielding thunks. That refactoring doesn't touch the use of yield, so if it's not "easy," that's a problem with Python rather than with yield.

It is *not* so easy in Python to express the concurrency the ActorScript version expresses.

I think I'd actually agree with an even stronger point: It's not easy to combine yield with the way ActorScript tries to make concurrency easy to express. When subexpressions are evaluated concurrently, (yield(1) + yield(2)) is ambiguous.

In other languages, when there's a left-to-right order, no ambiguity arises: We respond to the most recent request with 1. We receive a new request carrying the result of yield(1). We respond to that request with 2. We receive a new request carrying the result of yield(2). We add the two results and proceed into the parent expression.
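
In Python, that left-to-right request/response protocol can be acted out literally with generator send() (a sketch; expr is an invented name, and the driver below plays the client):

```python
def expr():
    # evaluate (yield 1) + (yield 2), left to right
    a = yield 1        # request with 1; resume with the client's answer
    b = yield 2        # request with 2; resume with the client's answer
    return a + b       # proceed into the parent expression

g = expr()
print(g.send(None))    # prints 1: the first request
print(g.send(10))      # answer yield 1 with 10; prints 2: the second request
try:
    g.send(20)         # answer yield 2 with 20
except StopIteration as done:
    print(done.value)  # prints 30: the sum
```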

To resolve the ambiguity for concurrent evaluation, we'd need some way to communicate across subexpressions. There could potentially be named pipes like (yield <a >b (1) + yield <b >c (2)), but the scoping and static typing of these labels might be finicky.

### re clients as effect handlers

I might be missing something, but the examples I've seen look a lot like object methods. The Actor receives a request, it evaluates an impure expression, and then it takes the return value and sends it as a response. It never sends a response or receives another request partway through the expression as a side effect, so the client is not acting as an effect handler.

As with the earlier discussed actor for computing a natural number of unbounded size, Actors can loop by sending themselves messages, handling and making other requests in the midst of such a loop.

### Explicit continuation state

That seems unrelated. I'm not necessarily even talking about looping. What I'm saying is that there's a difference between having responses and requests partway through an expression's evaluation:

input => yield(yield(input + 1) + yield(2))

Versus encoding every response as a return value and every request as a variable:

input => {
  if (this.computationState == 1) {
    this.computationState := 2
    return Just(input + 1)
  } else if (this.computationState == 2) {
    this.firstInput := input
    this.computationState := 3
    return Just(2)
  } else if (this.computationState == 3) {
    this.computationState := 4
    return Just(this.firstInput + input)
  } else {
    return Nothing()
  }
}


There's not quite such a big difference if we use a "become another Actor" idiom, because then we can at least express the original program in continuation-passing style. (That's still quite a difference though....)

Perhaps it'll confuse you that I forgive the explicit stack state in Keean's Python example but I complain about the explicit continuation state here. That's because in this case it's actually relevant; Keean's explicit state is due to a lack of unlimited recursion, while this explicit state is due to a lack of yield.

### Lazy message streams?

What exactly is going on in SameFringe? To incrementally compare the fringes, Fringe must return a lazy stream of messages, but somehow Fringe must not run on ahead; it must wait for SameFringe to request the next message.

Semantically you want to pass the tree to Fringe and then have it return the first element of the result stream only, and suspend until the next element of the result is needed. Is [String*] a special type for lazy streams?

I am not sure I see the advantage over using yield. The cost seems to be creating more special case language features, and Haskell has shown that writing efficient lazy code is not necessarily intuitive nor easy. Yield has the advantage of behaving like strict imperative code, with which it is easy to write efficient code.

What?!?

### Yield is lazy

Yield is used in strict languages to provide lazy semantics. Consider the following Fibonacci generator:

function* fib() {
  var x = 1;
  var y = 1;
  while (true) {
    yield x;
    var t = y + x;
    x = y;
    y = t;
  }
}

function take(n, f) {
  var s = Array(n);
  var g = f();
  for (var i = 0; i < n; ++i) {
    s[i] = g.next().value;
  }
  return s;
}

var first3fib = take(3, fib);


In a lazy language fib simply returns a list (take is a common function in Haskell).

fib :: [Int]
fib = f 1 1 where
  f x y = x : f y (x + y)

first3fib = take 3 fib


Hence you can see in a functional language the list returned by fib must be lazy not strict (otherwise fib loops forever).

### re yield is lazy

I was asking about the phrase "return a lazy stream of messages". I'm not sure what that phrase would mean in an actor context.

LazyFringe returns a (lazily constructed) list.

### Not a single message

I don't see how you can return a lazy list in a single message. You would need to return the head of the list as a message, and then get each successive head from the tail by sending a new request, returning the next item in the lazy list (as it is important not to do the calculation until the next item is actually needed).

However, you want this to look like a lazy list in the source code, so somehow the lazy list has to encompass the semantics of sending a request for each subsequent item, and waiting for the response.

It means that operations that are normally local to an actor (like pattern matching on a list) might cause remote actions (message round trip). It also means that to read several items from the list would result in a lazy stream of messages.

### re not a single message

I don't see how you can return a lazy list in a single message. You would [...return a lazy list this way ...]

You seem to see how perfectly well.

the lazy list has to encompass the semantics of sending a request for each subsequent item,

I don't know what it means for a "lazy list [to] encompass semantics".

It means that operations that are normally local to an actor (like pattern matching on a list) might cause remote actions (message round trip).

I don't know what are "operations that are normally local to an actor (like pattern matching on a list)".

A list is an actor, of course.

It also means that to read several items from the list would result in a lazy stream of messages.

There is no "stream of messages".

### Technical Terminology

Are you perhaps reading this with some formal definition of a stream of messages? I simply mean that there will be a stream (more than one message sent sequentially relating to the same thing), and this happens on demand (it's lazy).

I don't think a lazy-list is an actor at all. I think it is an interface. The actor that generates the lazy list is the "producer" actor, and the consumer of the list would request the next item from the producer via the interface. Because it's lazy, you cannot separate the list from the producer, as you need to be able to calculate the new values on demand. A lazy list is like a pipe that connects the producer to the consumer; it is neither the producer nor the consumer. With actors I think the lazy list itself disappears, because it would simply be forwarding 'next' requests from the consumer to the producer, and forwarding the responses the other way.

It would be good to see the definition of the equality operator on lazy lists.

### FutureList

An Actor of FutureList◅aType▻ is either
1. the empty list of type FutureList◅aType▻ or
2. a list whose first element is of aType and whose rest is of Future◅FutureList◅aType▻▻.
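
That definition can be mimicked in ordinary Python (a sketch: future_fringe, to_list, and the (head, future-of-rest) pair encoding are all inventions here, with concurrent.futures standing in for ActorScript futures):

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()
EMPTY = None   # the empty FutureList

def future_fringe(tree):
    """Return the fringe as a FutureList-style cell: (head, future-of-rest),
    or EMPTY. The head is computed now; the rest is postponed to a future."""
    def fringe(stack):
        while stack:
            node = stack.pop()
            if 'leaf' in node:
                # postpone computing the remaining fringe
                return (node['leaf'], pool.submit(fringe, list(stack)))
            stack.append(node['right'])
            stack.append(node['left'])
        return EMPTY
    return fringe([tree])

def to_list(cell):
    """Force a whole FutureList-style list into an ordinary Python list."""
    out = []
    while cell is not EMPTY:
        head, rest = cell
        out.append(head)
        cell = rest.result()   # wait for the postponed tail
    return out

tree = {'left': {'leaf': 'The'}, 'right': {'leaf': 'boy'}}
print(to_list(future_fringe(tree)))  # ['The', 'boy']
```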

### Why FutureList<a>

Why is a FutureList a better solution than an interface on the producer with a 'next' method? After all you have to pass around the producer (even if wrapped in a FutureList) as you need it to calculate the next values on demand.

In fact FutureList is making it less-lazy, as it forces eager evaluation of the head of the list. It also doesn't really do anything, as the rest of the list is just the producer, you may as well just use the producer directly via an interface.

I would still like to see the definition of equality on FutureLists.

Note: I don't think this affects the use of Postpone in the example, so it may not be the main point here, but I am still interested in whether using interface would simplify the example.

### humpty dumpty (re technical terminology)

Are you perhaps reading this with some formal definition of a stream of messages?

I am reading it as if it were your intention to speak carefully, plainly and to be understood.

I simply mean that there will be a stream (more than one message sent sequentially relating to the same thing), and this happens on demand (it's lazy).

As you know, message send events are, in general, partially ordered.

I don't think a lazy-list is an actor at all. I think it is an interface.

In any event, Hewitt corrected you.

The actor that generates the lazy list is the "producer" actor

Multiple actors generate various parts of the list. Which one do you mean?

What is a ""producer" actor"?

Because it's lazy, you cannot separate the list from the producer,

Every actor is distinct from the actor that created it.

as you need to be able to calculate the new values on demand.

A future encapsulates the computation needed to lazily produce a value.

A lazy list is like a pipe that connects the producer to the consumer, it is neither the producer nor the consumer

I don't know what "like a pipe that connects the producer to the consumer" means in this context.

A lazy list is an actor of a particular type.

With actors I think the lazy list itself disappears because it would be simply forwarding 'next' requests from the consumer to the producer, and forwarding the responses the other way.

In general, the definition of an Actor in ActorScript does not determine a physical representation (it only imposes constraints on possible physical representations).

### Producer waits til next is requested; consumer waits for next

The producer waits until the next string is requested; the consumer waits for the next string.

What does the corresponding code look like using Yield?

### Fibonacci

The Fibonacci generator above is a better example for yield, as it shows how it works for an infinite sequence. This really needs yield, as 'fib' would never return otherwise. The original example at the top would still work with strict semantics, but is more efficient done lazily, as you can terminate on the first difference in fringes.

### Code for PostponedFringe and SameFringe? using *Yield*

What does code for PostponedFringe and SameFringe? look like using Yield?

Edit: renamed to "PostponedFringe".

### LazyFringe using Yield

Here it is in Python (as not all JavaScript implementations provide yield):

class lazy_fringe:
    def __init__(self, tree):
        self.tree = tree

    def __iter__(self):
        s = [self.tree]
        while s:
            node = s.pop()
            leaf = node.get('leaf')
            if leaf:
                print leaf,
                yield leaf
            else:
                right = node.get('right')
                if right:
                    s.append(right)
                left = node.get('left')
                if left:
                    s.append(left)

def contents_equal(left, right):
    left_iterator = iter(left)
    right_iterator = iter(right)
    while True:
        try:
            l = next(left_iterator)
        except StopIteration:
            try:
                r = next(right_iterator)
            except StopIteration:
                return True
            return False
        try:
            r = next(right_iterator)
        except StopIteration:
            return False
        if r != l:
            return False

a_tree = {
    "left" : {
        "left" : {"leaf" : "The"},
        "right" : {"leaf" : "girl"}
    },
    "right" : {
        "leaf" : "."
    }
}

a_lazy_list = lazy_fringe(a_tree)

print "1. [",
b1 = contents_equal(a_lazy_list, ["The", "girl", "."])
print "]", b1

print "2. [",
b2 = contents_equal(a_lazy_list, ["The", "boy", "."])
print "]", b2


Edit: I have updated this to be the same as the later version, as I think is now a better solution in Python.

### PostponedFringe must return a list of type FutureList◅String▻

LazyFringe must return a list of type FutureList◅String▻.

Edit: renamed to "PostponedFringe".

### Must?

What do you mean "must return a type FutureList<String>"? In the Python example it returns a string generator, which is lazy so that same_fringe only causes lazy_fringe to produce nodes until the first difference is found, and it is equivalent to a FutureList<String> (actually because Python is dynamic, it could be any type in the tree, not just a String). As the type system is different the exact types are not going to be the same, but the functionality is the same.

I may as well say "FutureList<String> is not Pythonic", which although true adds nothing to the discussion.

### PostponedFringe must return a list (typing not possible in some PL)

PostponedFringe must return a list. (Explicit typing is not possible in some programming languages.)

Edit: renamed to "PostponedFringe".

### It returns a generator.

A generator is a lazy-list in Python. Here we get the generator returned from lazy_fringe:

left_fringe = lazy_fringe(left)


So "left_fringe" is effectively the lazy list. Later we get the head of the lazy list:

l = next(left_fringe)


This 'pops' the head from left_fringe, consuming it, so that after this statement left_fringe is the rest of the list (without the head), and the next call to 'next' returns the next element, lazily.

### re it returns a generator

So "left_fringe" is effectively the lazy list.

The Python version does not "effectively" return a Python list. For example, Python's list operators cannot be directly applied to a generator.

### Python Generators.

I think this is an irrelevance. You could do that; it's just not how Python does it. There is no reason you could not define regular list operators to work on generators.

### re python generators

I think this is an irrelevance. You could do that; it's just not how Python does it. There is no reason you could not define regular list operators to work on generators.

Show us.

### What's to show?

What's to show? I define the generator API to be the same as a Python list object, such that list.pop() calls next(list).
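
A sketch of that idea in ordinary Python (GenList and its method set are hypothetical; pop() here means "take the next element", whereas a real Python list pops from the other end):

```python
class GenList:
    """A wrapper giving a generator a list-like single-pass API:
    pop() takes the next element; truthiness tests for exhaustion."""
    def __init__(self, gen):
        self._gen = iter(gen)
        self._peeked = []
    def pop(self):
        if self._peeked:
            return self._peeked.pop()
        return next(self._gen)
    def __bool__(self):
        # peek one element ahead to learn whether anything remains
        if self._peeked:
            return True
        try:
            self._peeked.append(next(self._gen))
            return True
        except StopIteration:
            return False

g = GenList(x * x for x in range(3))
print(g.pop(), g.pop(), g.pop())  # 0 1 4
print(bool(g))                    # False
```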

### re what's to show?

What's to show? I define the generator API to be the same as a Python list object, such that list.pop() calls next(list).

I am curious to compare what the code you have in mind actually does, and how, compared to the meaning of the ActorScript code in question.

(I assume that "pop" is not the only Python list method.)

### Iterator Interfaces

In most functional languages, head and tail are the only primitive list operations. This is reflected structurally in the FutureList ActorScript type. That Python provides other methods on its list object is a symptom of Python's object-orientedness.

Let's look instead at structure. Algorithms access data according to 'access patterns', which might be (non-exhaustively) single-pass sequential, multi-pass sequential, bidirectional, indexed, and random. Functional lists are used for the first two access patterns (sequential access). Looking at the read side, for multi-pass sequential we need head and tail; for single-pass, just pop. Both of these can be represented by a read-only iterator with two methods: source (read-only dereference) and successor (move to the next element). Multi-pass iterators are copyable; single-pass ones are not.

In the Python example above, 'next' performs the function of both source and successor. So the only change needed to Python to enable full list functionality on generators would be to replace 'next' with 'source' and 'successor', then implement an iterator for normal lists with the same two methods. Lists themselves might have other methods like empty, size, etc., but all access would be through the iterators in both the strict and lazy cases.

As you can tell, I think a data type is defined by the functions you can perform on it. In a duck-typed language like Python, this means two objects simply have to provide the same methods (a bit like an ad-hoc interface). If we settle on iterators for collection access (which is both sufficient and optimal), then generators and lists just need to have methods that return iterators that implement the same methods, and for read-only lists those would be 'source' and 'successor'.

### re Iterator Interfaces

[...not code...]

So, no code then?

"There is no reason you could not define regular list operators to work on generators". -- K.S.

### Iterator example code

I thought the explanation was more helpful, but I can update the code if you like:

def lazy_fringe(tree):
    s = [tree]
    while s:
        node = s.pop()
        leaf = node.get('leaf')
        if leaf:
            yield leaf
        else:
            right = node.get('right')
            if right:
                s.append(right)
            left = node.get('left')
            if left:
                s.append(left)
    yield None

def contents_equal(left, right):
    l = left.first()
    r = right.first()
    while True:
        if l.source() != r.source(): return False
        if l.source() is None: return True
        l = l.successor()
        r = r.successor()

def same_fringe(left, right):
    return contents_equal(lazy_fringe(left), lazy_fringe(right))

f1 = {"left" : {"leaf" : "The"}, "right" : {"leaf" : "girl"}}

print contents_equal(lazy_fringe(f1), ["The", "girl"])   # True
print contents_equal(lazy_fringe(f1), ["The", "boy"])    # False


The definition of the generator (lazy_fringe) remains unchanged, and now contents_equal can be used on at least a single pass sequential iterator. In our modified Python dialect, lists and generators both have an interface where 'first()' returns an iterator pointing to the first element. In fact contents_equal would probably be a standard function, and same_fringe becomes more or less redundant, as you are just comparing fringe objects for equality using their iterators, which would be a standard technique not really requiring a separate function.

In a statically typed language you would overload the equality operator on the iterator interface, allowing the code to be simply:

(lazy_fringe(left) == lazy_fringe(right))


With no loss of efficiency.
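
That overloading can be sketched in ordinary Python (Iter, source, and successor are the hypothetical dialect's names; as in the code above, a trailing None serves as the end-of-sequence sentinel):

```python
class Iter:
    """Single-pass iterator with source()/successor(), with == overloaded to
    compare the remaining sequences element by element, stopping early."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cur = next(self._it, None)   # None doubles as the end sentinel
    def source(self):
        return self._cur
    def successor(self):
        self._cur = next(self._it, None)
        return self
    def __eq__(self, other):
        while True:
            if self.source() != other.source():
                return False               # first difference decides it
            if self.source() is None:
                return True                # both exhausted
            self.successor()
            other.successor()

print(Iter(["The", "girl"]) == Iter(["The", "girl"]))  # True
print(Iter(["The", "girl"]) == Iter(["The", "boy"]))   # False
```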

### Code should be transparent; obviously meet its specifications

Code should be transparent; i.e., obviously meet its specifications.

Unfortunately, the example above fails the test of being obviously equivalent to the code at the beginning of this topic :-(

### Euclid and thousands of years of mathematics.

I think you need to justify such a statement; otherwise it's just polemical hyperbole.

The definition of same_fringe looks very similar to the definition you give at the top, and I go on to define equality on containers, which you don't define; you just assume that somehow it is obvious how to compare two lists for equality. If I had made the same assumption, I would not have bothered to define contents_equal, but would only have posted the final definition of same_fringe.

There is nothing difficult to understand about the imperative code I posted, and in fact imperative code generally seems easier to understand. It's the way people naturally describe algorithms.

It is obvious what contents_equal is doing, and it is written in a way that is natural, the way that mathematicians have written algorithms for thousands of years, going back to the ancient Egyptians and the Rhind Papyrus.

contents_equal is a wonderful and beautiful generic algorithm that works on all containers. The single function expresses a great deal about the nature of equality.

Finishing with a quote from Stepanov:

I believe that iterator theories are as central to Computer Science as theories of rings or Banach spaces are central to Mathematics.

### re polemical hyperbole

The definition of same_fringe looks very similar the the definition you give at the top

Does your code serialize computing the strings from leaf nodes?

### It does serialise computing the fringe of a tree.

The generator is given in "lazy_fringe" in the first example. It serialises computing the fringe of a tree for any leaf type, not just string, so it does the same but is more generic.

There is no change to this in the second example, so I did not repeat it. The only change was to make the generator returned implement a unified interface with a normal eager list. The first example will run in an unmodified Python interpreter; the second requires modifying the Python interpreter, since "y = next(x)" combines "y = x.source()" with "x = x.successor()", and we need to change the interface of Python generators so that they can be used in any list algorithm. I guess the designers of Python did not understand iterator theory. This is however irrelevant to my point about what such code should look like.

I will copy the generator into the second example, and update the test code to use a list as the updated example at the top now does for clarity.

### re "It does serialise computing the fringe of a tree."

Well, that makes it pretty different from the ActorScript code.

The biggest difference is that you need to be strict enough in successive fringe elements to distinguish them from "None".

### Serialisation is unavoidable

Serialisation is unavoidable, as you need to count the number of fringe nodes to the left of any fringe node to know its index.

Effectively the problem is to assign a 1 to the leftmost fringe node and then successively higher numbers to each fringe node moving right along the fringe. This cannot be parallelised, and is the core problem.

I see what you are getting at though, if string conversion is slow, it could be parallelised, but you have to know where the data is. If the tree is on one CPU of a grid computer, often it will take longer to send the node contents to another CPU to be converted to a string and sent back than to just simply do the serialisation locally. But I could imagine some very costly operation where it became worth it.

However the parallelization is wasteful of resources, as you only need to find one difference to declare the fringes different. So if the first elements are not the same, in order to exploit the parallelism we would need to request the equality checks on successive list elements before the comparison of the first element has completed, which is very wasteful of power. It will also use more memory, as there is more "work-in-progress".

### re Serialisation is unavoidable,

Serialisation is unavoidable,

That's false.

However the parallelization is wasteful of resources, as you only need to find one difference to declare the fringes different.

Time is a resource.

Speaking of which...

### Time is not the only resource

I was referring to the breakdown below; step (3) requires serialisation, so it is unavoidable in this algorithm. Unavoidable does not mean that all parts of the algorithm must be serialised, only that at some point every element is in a sequence.

1. finding the fringe: parallelism is limited by the branching factor of the tree
2. getString (leaf operation): parallelism is limited by the number of leaves
3. writing to the list: requires serialisation

So we have to serialise in (3) irrespective of how parallel we can make (1) and (2). If (3) dominates the time, then parallelising (1) and (2) is not going to gain anything, and could slow things down due to the time to send messages and copy data between CPU nodes.
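As an illustration of that breakdown (a hypothetical sketch in Python, not any code from this thread): the per-leaf work of step (2) can be handed to a thread pool, while step (3), collecting results into a list, remains inherently ordered:

```python
# Sketch: parallelise the per-leaf work (step 2) with a thread pool,
# then serialise the results into an ordered list (step 3).
from concurrent.futures import ThreadPoolExecutor

def fringe_nodes(tree):
    # Step (1): serial traversal yielding leaf nodes left-to-right.
    s = [tree]
    while s:
        node = s.pop()
        if node.get('leaf') is not None:
            yield node
        else:
            if node.get('right'):
                s.append(node.get('right'))
            if node.get('left'):
                s.append(node.get('left'))

def parallel_fringe(tree, get_string=lambda node: str(node['leaf'])):
    with ThreadPoolExecutor() as pool:
        # Step (2) runs concurrently across leaves; pool.map preserves
        # order, so step (3) still produces a serial list.
        return list(pool.map(get_string, fringe_nodes(tree)))

tree = {"left": {"left": {"leaf": "The"}, "right": {"leaf": "girl"}},
        "right": {"leaf": "."}}
print(parallel_fringe(tree))  # ['The', 'girl', '.']
```

If get_string is cheap, the thread overhead dominates, which matches the point above that parallelising (2) only pays off for costly leaf operations.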

If you run out of memory, you will produce no answer at all, whereas time will just take a little longer.

My best guess is that 'Postpone' reserves space in a stack for the result of the postponed computation, so that the computation can occur in parallel. Then the only un-postponed computation returns the first result, then when the next result is requested it returns the result of the top-of-stack computation? Does that sound right?

However this is an eager algorithm: it computes as many getStrings in parallel as it can. This is diametrically opposed to the idea of a lazy algorithm, where we consider that we may only need the first 3 elements of the output fringe, and therefore we do not want to compute the 'getString' until the value is asked for.

I understand your comment about time, I spent longer than I had intended writing those examples in three different languages...

### postpone (re keean on serialization)

My best guess is that 'Postpone' reserves space in a stack for the result of the postponed computation,

You don't have to guess. This was given in the earlier linked Actor paper.

The expression given to postpone will eventually resolve to some actor.

Postpone delays working on resolving that actor until a message is sent to be processed by the resolved expression.

In a meta-circular interpreter (see page 44 of the Actor paper linked previously), Postpone returns an actor that, when it gets a message, then resolves the postponed expression.
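That description can be sketched in Python (my own analogy, not the Actor paper's meta-circular code): Postpone wraps an expression in a proxy that resolves it only when the first message arrives, and at most once thereafter.

```python
class Postpone:
    """Sketch of a postponed actor: the expression is resolved only when
    the first message arrives, and at most once thereafter. Illustrative
    only; not the semantics from the Actor paper."""
    def __init__(self, resolve):
        self._resolve = resolve   # zero-argument postponed expression
        self._actor = None

    def send(self, message, *args):
        if self._actor is None:            # resolve on first message
            self._actor = self._resolve()
        return getattr(self._actor, message)(*args)

class Leaf:
    def __init__(self, s): self.s = s
    def getString(self): return self.s

resolved = []
p = Postpone(lambda: (resolved.append(True), Leaf("The"))[1])
assert not resolved                  # nothing resolved yet
assert p.send("getString") == "The"  # first message forces resolution
assert p.send("getString") == "The"  # later messages reuse the actor
assert len(resolved) == 1            # resolved at most once
```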

### Not parallel then?

So it does not execute the postponed computation until it is requested by the message. This is what I thought originally, so I was confused when you said it executes in parallel. The 'Fork' in the fringe computation follows one branch, and postpones the other, so there is at most only one computation happening at a time. I don't see the parallelism.

It doesn't actually specify if a stack is used for the postponed computations, but I guess that is considered a realisation of the model, and therefore not part of ActorScript. I probably should have said, this behaviour could be implemented using a stack of postponed computations.

### re: I don't see the parallelism.

It depends how "client" actors use the fringe list and how the tree is defined.

### The Consumer

I think I see: if the consumer keeps requesting 'next' before it has received the responses so far, then these computations will run in parallel. I am not sure I see how the tree definition affects it though? Unless you just mean the bifurcation of the tree acts as a limit on parallelism, as the consumer cannot request nodes in advance of the producer's tree traversal?

### re I am not sure I see how the tree definition affects it though

I am not sure I see how the tree definition affects it though?

Can getString return a future?

### By changing the definition?

Do you mean by changing:

Actor Leaf.[aString:String] extension Tree using
getString[ ]:String → aString▮


to something like:

Actor Leaf.[aString:String] extension Tree using
getString[ ]:Future<String> → aString▮


### re by changing the definition?

to something like:

  Actor Leaf.[aString:String] extension Tree using
getString[ ]:Future → aString▮


That doesn't look right to me but if I wanted to give you a precise answer I would look at the actor paper. You can do that.

### The Book

I've bought the book, so hopefully that will help.

### Thanks!

Thanks!

It contains additional motivation :-)

### Serialisation is

Serialisation is unavoidable, as you need to count the number of fringe nodes to the left of any fringe node to know its index.

The index is needed if the result is being written into an array. But not if the output is a list. So the fringe computation does not require serialisation, although output into a serial form does.

In this particular case using an array is a form of premature optimisation, passing strings and lists of strings about scales better (log n totally ordered points rather than n).

### Serialisation

I think you just misunderstand my argument. My argument about indexing was about putting the fringe elements into a list, not an array. But I think you get that from "although output into a serial form does", which was the point I was making.

To parallelise get-string we want not a list, but another tree, so that we can run the left and right forks in parallel and write into the correct place in the tree.

In effect what we want is to run 'getString' on the fringe of tree1 to produce tree2 where all the leaves are strings, and then we can iterate along the fringe of tree2 to serialise it into a list.
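The two-phase scheme described above might be sketched as follows (hypothetical helper names, my own illustration):

```python
def tree_map(f, tree):
    # Phase 1: apply f at every leaf. The left and right subtrees are
    # independent, so this phase could be parallelised.
    if 'leaf' in tree:
        return {'leaf': f(tree['leaf'])}
    return {'left': tree_map(f, tree['left']),
            'right': tree_map(f, tree['right'])}

def fringe(tree):
    # Phase 2: the inherently serial walk along the fringe.
    if 'leaf' in tree:
        yield tree['leaf']
    else:
        for side in ('left', 'right'):
            yield from fringe(tree[side])

t1 = {'left': {'leaf': 1},
      'right': {'left': {'leaf': 2}, 'right': {'leaf': 3}}}
t2 = tree_map(str, t1)      # all leaves converted, tree shape preserved
print(list(fringe(t2)))     # ['1', '2', '3']
```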

### I see your point. I thought

I see your point. I thought that you were referring to the calculation of the fringe, rather than the comparison between the two fringes. This would be the serial part (3) in your breakdown in a different reply.

### A Version In Python

Due to Python's strange iterators, the lack of an interface for deriving iterators, and the lack of overloading, it looks quite messy compared to my suggested 'modified Python' above, but here is a version in plain Python:

class lazy_fringe:
    def __init__(self, tree):
        self.tree = tree

    def __iter__(self):
        s = [self.tree]
        while s:
            node = s.pop()
            leaf = node.get('leaf')
            if leaf:
                print leaf,
                yield leaf
            else:
                right = node.get('right')
                if right:
                    s.append(right)
                left = node.get('left')
                if left:
                    s.append(left)

def contents_equal(left, right):
    left_iterator = iter(left)
    right_iterator = iter(right)
    while True:
        try: l = next(left_iterator)
        except StopIteration:
            try: r = next(right_iterator)
            except StopIteration:
                return True
            return False
        try: r = next(right_iterator)
        except StopIteration:
            return False
        if r != l:
            return False

a_tree = {
    "left" : {
        "left" : {"leaf" : "The"},
        "right" : {"leaf" : "girl"}
    },
    "right" : {
        "leaf" : "."
    }
}

a_lazy_list = lazy_fringe(a_tree)

print "1. [",
b1 = contents_equal(a_lazy_list, ["The", "girl", "."])
print "]", b1

print "2. [",
b2 = contents_equal(a_lazy_list, ["The", "boy", "."])
print "]", b2


Note: I have added a print to the generator, so that it is clear that the comparisons are lazy, i.e. the generator only produces the elements necessary to prove inequality.

Edit: improved version, by making lazy_fringe an object (as list is an object) we get a more uniform interface, in that we can get an iterator from both a list and a lazy_fringe using the same Python syntax "iter(object)". In this way we can bring getting the iterator into contents_equal, and therefore allow any iterable object to be compared in the same way.

### Simpler version

It looks as if there is at least one level of unnecessary abstraction in the python example: why build a list and use yield when the result of the generator can be cast directly to a list? It is already a sequence type, and adding another is not needed (also at a quick glance the ordering of the list produced for the fringe seems to have a subtle bug).

The only messy part is passing a generator to a generator inside Python. The "for x in generator()" idiom below is quite standard and minimal, although there have been noises about putting direct support for it into Python 3; I can't remember if that went anywhere.

def fringe(tree):
    if tree.get('leaf'):
        yield tree.get('leaf')
    for seq in ('left', 'right'):
        if tree.get(seq):
            for label in fringe(tree.get(seq)):
                yield label


### Recursive Version

I deliberately wrote a non-recursive version due to Python's recursion limit. Python is not a functional language, and does not do tail-call elimination. Increasing the recursion limit from 1000 is considered dangerous, so the recursive version is only good for small trees.

I don't think the recursive version is simpler. It is fewer lines, but I could remove the variable definitions to make the non-recursive version essentially a loop with three conditional statements, a yield, and two appends. I think the "for seq" loop makes it more complicated, not less.

def lazy_fringe(tree):
    s = [tree]
    while s:
        node = s.pop()
        if node.get('leaf'):
            yield node.get('leaf')
        if node.get('right'):
            s.append(node.get('right'))
        if node.get('left'):
            s.append(node.get('left'))


### Doesn't that drift a little far from the point?

Originally this thread seemed to start as a comparison of fringe as an actor providing a future, and fringe as a generator. The question originally would appear to be: what are the semantic differences and how would they affect the evaluation?

Placing the stack explicitly in the program, or relying on the runtime to maintain the stack does not appear to provide any illumination about the original question. If the runtime can be told to provide a bigger stack and the program will execute anyway then what has been achieved by making the stack explicit? It does not appear to make the expression of the program easier. It does not appear to illustrate any difference between the two approaches.

The raw issue is that generators cannot be composed cleanly without something that simulates the for x in g.next(). Sure, you have glued it together in another way, but now you are manually maintaining a stack in the program. Would you claim that this is clearer, or simpler, or tells us more about the evaluation?

The slightly more interesting question that seems to have been lost on a tangent is whether or not lists and their operators can be replaced by generators. In the finite case I would say that they probably can - with most of the resulting translations being available in Python's itertools module. Thomas may have suggested that there were examples that could not be translated. Or then again perhaps he did not, I found that part of the conversation a little opaque. In the infinite case the question is a bit undefined and the best we could say is that certain operations on infinite generators are "roughly similar" to list operators, and again they are well represented in itertools.
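For the finite case, a few such list-to-generator translations using the standard itertools module (my choice of examples):

```python
# A few list operators and their generator counterparts from itertools.
import itertools

xs = [1, 2, 3]
ys = [4, 5]

# concatenation: xs + ys  ->  itertools.chain
assert list(itertools.chain(xs, ys)) == xs + ys

# slicing: xs[0:4]  ->  itertools.islice, which also works on
# infinite generators where list slicing has no counterpart
naturals = itertools.count(0)            # 0, 1, 2, ... (unbounded)
assert list(itertools.islice(naturals, 4)) == [0, 1, 2, 3]

# a "roughly similar" prefix operation on an infinite stream
evens = (2 * n for n in itertools.count(0))
assert list(itertools.takewhile(lambda n: n < 10, evens)) == [0, 2, 4, 6, 8]
```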

### Iterators

Sorry for the diversion, although I stand by my opinion that the non-recursive version is simpler, and easier to understand. It doesn't really maintain a stack, but a queue of nodes to visit next, although we visit the nodes in the reverse order to the order they are queued for efficiency, so we end up using push and pop, making it look like a stack.

I think the code in Stepanov's "Elements of Programming" represents some of the most beautiful code I have ever read. Algorithms are defined over data-access-patterns, not containers, so the difference between a generator and a list is irrelevant. There are not just simple iterators like in Python, but a whole class of sequential iterators (single-pass, multiple-pass, bidirectional, indexed and random) plus there are iterators for trees (a binary tree has a bifurcating-iterator) and for multi-dimensional structures (coordinate-iterators). Python somewhat obscures this by having very limited iterators.

I think it's obvious that a unified interface for lists and generators is possible, and can be done elegantly (although maybe not in Python). See my improved version above, where I define lazy_fringe as an iterable object, and have an updated contents_equal that will work on any pair of iterable objects.

### Once upon a time, in a land far far away

Hi,

Once upon a time, in a land far far away logic programming
existed. And people were thinking about how they could change
the standard execution order. And they came up with freeze/2,
which allowed certain co-routines.

If your Prolog interpreter doesn't have freeze/2 you can
use the following meta-interpreter:

:- use_module(library(lists)).

solve(G, I, O) :- select(freeze(T,F), I, H), T, !, solve((F,G), H, O).
solve(freeze(T,F), I, O) :- T, !, solve(F, I, O).
solve(freeze(T,F), I, [freeze(T,F)|I]).
solve(true, I, I).
solve((A,B), I, O) :- solve(A, I, H), solve(B, H, O).
solve(H, I, O) :- advised(H, B), solve(B, I, O).

advised(freeze(_,_), _) :- !, fail.
advised(true, _) :- !, fail.
advised((_,_), _) :- !, fail.
advised(H, B) :- clause(H, B).

The fringe/same example I would more appropriately name
the flatten/equal example. Here is a realization in standard
execution order:

flatten(leaf(X),[X|I],I).
flatten(fork(X,Y),I,O) :- flatten(X,I,H), flatten(Y,H,O).

equal([],[]).
equal([X|Y],[X|Z]) :- equal(Y,Z).

We immediately recognize a drawback. If we call
flatten(T1,L1,[]), flatten(T2,L2,[]), equal(L1,L2)
then L1 and L2 will be materialized before they are
handed to the equal predicate. So we use a lot of
space.

If the interpreter has freeze AND good tail recursion,
respectively last call optimization how it is called in
the WAM, then we can go without this additional space.
Here is freeze used to pull Prolog iteration from a
result argument:

flatten(X,I,O) :- freeze(nonvar(I),flatten2(X,I,O)).

flatten2(leaf(X),[X|I],I).
flatten2(fork(X,Y),I,O) :- flatten(X,I,H), flatten(Y,H,O).

It does indeed freeze:

?- solve(flatten(fork(leaf(a),
fork(leaf(b),leaf(c))),X,[]),[],L).
L = [freeze(nonvar(X),flatten2(fork(leaf(a),
fork(leaf(b),leaf(c))),X,[]))]

But it does also unfreeze and thus solve the problem:

?- solve((flatten(fork(leaf(a),
fork(leaf(b),leaf(c))),X,[]),flatten(fork(fork(leaf(a),leaf(b)),
leaf(c)),Y,[]),equal(X,Y)),[],L).
X = [a,b,c],
Y = [a,b,c],
L = []

So basically in Prolog we can have multiple iterators
in parallel, their invocation controlled by the guard
of freeze/2. But these are only co-routines, not concurrent
execution.

Bye

### Using pattern matching for understand-ability

Pattern matching can be used for increased understand-ability. For example,


PostponedFringe.[aTree:Tree]:FutureList◅String▻  ≡
aTree �
↑↑Leaf[aString] ⦂ [aString] ⍌
↑↑Fork[aLeft, aRight] ⦂ [⩛PostponedFringe.[aLeft],  ⩛Postpone PostponedFringe.[aRight]]▮


For example, ["The", "boy"]▮ is equivalent to the following:
PostponedFringe.[Fork[Leaf["The"], Leaf["boy"]]]▮

The procedure PostponedFringe can be used to define SameFringe?, which determines if two trees have the same fringe [Hewitt 1972]:

   SameFringe?.[aTree:Tree, anotherTree:Tree]:Boolean ≡  //  test if two trees have the same fringe
PostponedFringe.[aTree] = PostponedFringe.[anotherTree]▮


For example, [False, True]▮ is equivalent to the following:

Let aList ← PostponedFringe.[Fork[Leaf["The"], Leaf["boy"]]]｡    // aList = ["The", "boy"]
[SameFringe?.[aList, ["The", "girl"]],              // False
SameFringe?.[aList, ["The", "boy"]]]▮           // True


Note that Leaf can be defined as an extension of Tree:

Actor Leaf[_:String] extension Tree▮


and Fork can be defined as an extension of Tree:

Actor Fork[_:Tree, _:Tree] extension Tree▮


Edit: renamed to "PostponedFringe".

### Simple in Haskell

The Python code above is a bit verbose. For comparison, I believe this is exactly the same program as:

data Tree = Leaf String | Node Tree Tree
fringe (Leaf l) = [l]
fringe (Node a b) = (fringe a) ++ (fringe b)


Up to the reuse in a comparison being implicitly defined on the lazy lists:

fringe (Node (Leaf "The") (Leaf "boy")) == ["The","boy"]    -- True


### Discrimination is not the same as extension

There can be other types of trees besides Leaf and Fork.

### Lost me there.

I'm afraid to say that while I understand your words and can see that they are true, I am at a loss to draw a connection. While the Tree datatype is concrete, it is simple enough to convert into a generic class. Extending your example to another type of tree would provide some clarity.

### Use type-classes

You just need to use type-classes, but as you say, it does not seem relevant to the original point.

### Tree can be extended

Tree can be defined as follows:

Interface Tree with getHash[ ]↦Integer▮


Leaf can be defined as an extension of Tree as follows:

Actor Leaf[aString:String] extension Tree using
getHash[ ] → Hash.[aString]▮


and Fork can also be defined as an extension of Tree:

Actor Fork[aLeft:Tree, aRight:Tree] extension Tree using
getHash[ ] → Hash.[aLeft.getHash[ ], aRight.getHash[ ]]▮


Of course the definition of PostponedFringe is still the same:


PostponedFringe.[aTree:Tree]:FutureList◅String▻  ≡
aTree �
↑↑Leaf[aString] ⦂ [aString] ⍌
↑↑Fork[aLeft, aRight] ⦂ [⩛PostponedFringe.[aLeft],  ⩛Postpone PostponedFringe.[aRight]]▮


Edit: fixed bug in definition of Fork.

### Hash is an algorithm

I am confused here. 'getHash' is an algorithm on trees, not an extension of the tree data type, and that hash function looks weird: it takes the hash of the string in the leaf, and then there is a mysterious two-argument hash that computes something from the hash of the left branch and the hash of the right branch. To me "hash (fringe tree)" would seem more sensible, as we then hash over a list.

Here's the Haskell version using type classes, so that 'fringe' is generic over all types that are instances of Leaf and Fork.

{-# LANGUAGE TypeFamilies, FlexibleInstances, FlexibleContexts, UndecidableInstances #-}

import Data.Char

-- specification

class Hash x where
    hash :: x -> Int

class Leaf t where
    type ValueType t :: *
    isLeaf :: t -> Bool
    getLeaf :: t -> ValueType t

class Fork t where
    isFork :: t -> Bool
    getLeft :: t -> t
    getRight :: t -> t

-- implementation

fringe t | isLeaf t = [getLeaf t]
fringe t | isFork t = fringe (getLeft t) ++ fringe (getRight t)

data T x = L x | F (T x) (T x)

instance Leaf (T x) where
    type ValueType (T x) = x
    isLeaf (L _) = True
    isLeaf _ = False
    getLeaf (L x) = x

instance Fork (T x) where
    isFork (F _ _) = True
    isFork _ = False
    getLeft (F l _) = l
    getRight (F _ r) = r

instance Hash [Char] where
    hash [] = 0
    hash (x:xs) = (ord x + hash xs) `mod` 8

instance Hash x => Hash (x, x) where
    hash (x, y) = (hash x + hash y) `mod` 8

instance (Hash x, Hash (x, x)) => Hash (T x) where
    hash t | isLeaf t = hash (getLeaf t)
    hash t | isFork t = hash (getLeft t, getRight t)

-- example

main = do
    putStrLn $ show $ fringe (F (L "The") (L "boy")) == ["The", "boy"]
    putStrLn $ show $ hash (F (L "The") (L "boy"))


Note: The definition of hash looks a bit more complicated than the ActorScript version, as I have actually defined the hash algorithm, so that there is a complete working program. If we just look at the parts equivalent to the given ActorScript definitions we would have:

class Hash x where
    hash :: x -> Int

instance (Hash x, Hash (x, x)) => Hash (T x) where
    hash t | isLeaf t = hash (getLeaf t)
    hash t | isFork t = hash (getLeft t, getRight t)


### Haskell is Lazy

Of course Haskell is lazy, so this sort of thing is a good semantic fit for the language. The problem with Haskell is that lazy performance is not easy to understand, and adding strictness annotations to keep memory use under control and performance good is not straightforward. The benefit of 'yield' in an eager language is that you get good performance and memory use by default for both the strict and lazy cases.

### In Haskell, execution is always postponed

In Haskell, execution is always postponed.

More general programming languages can explicitly mark postponed execution using Postpone. They do not need Yield ;-)

### In Haskell execution is sometimes not postponed

In Haskell you can request eager evaluation with the "!" prefix operator. So functions are implicitly "postponed", but can be explicitly not-postponed, so lack of generality is not a valid criticism. What is a valid criticism is that pervasive laziness seems to be the wrong decision, as people find it hard to write efficient lazy programs, so I think you have got it the right way round with explicit laziness. In general laziness has a problem in that it has implicit buffering and memoisation, which again can lead to unexpected inefficiencies and memory usage, and this is a problem you don't have with yield. You can of course build buffering and memoisation on top of yield.
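For example, buffering and memoisation can be layered on top of a plain generator with the standard itertools.tee (a sketch of my own, not code from this thread):

```python
# itertools.tee buffers only the elements one branch has consumed ahead
# of the other, making the memory trade-off explicit rather than
# pervasive as in a lazy-by-default language.
import itertools

calls = []

def expensive_fringe():
    for x in ["The", "boy"]:
        calls.append(x)      # record how often each element is produced
        yield x

a, b = itertools.tee(expensive_fringe())
assert list(a) == ["The", "boy"]
assert list(b) == ["The", "boy"]   # replayed from tee's buffer...
assert calls == ["The", "boy"]     # ...so each element was produced once
```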

### Same-fringe = bad example

Same-fringe is a bad example for co-routines, since it can be easily implemented with thunks (with or without memoization). Yield/coroutines are bigger tools.
See "Yield: Mainstream Delimited Continuations". A more interesting example should demonstrate two-way communication.
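The thunk claim can be sketched in Python (my own illustration, not from the linked paper): a lazy list is either None or a pair of a head and a thunk for the tail, so comparison forces only as much of each fringe as needed.

```python
def fringe(tree, rest=lambda: None):
    # Build a lazy list (head, tail-thunk) of the tree's fringe without
    # coroutines: the right subtree is captured in a thunk, unevaluated.
    if 'leaf' in tree:
        return (tree['leaf'], rest)
    return fringe(tree['left'], lambda: fringe(tree['right'], rest))

def same_fringe(xs, ys):
    while xs is not None and ys is not None:
        if xs[0] != ys[0]:
            return False            # early exit: no more forcing needed
        xs, ys = xs[1](), ys[1]()   # force the next thunk on each side
    return xs is None and ys is None

t1 = {'left': {'leaf': 'The'}, 'right': {'leaf': 'boy'}}
t2 = {'left': {'left': {'leaf': 'The'}, 'right': {'leaf': 'boy'}},
      'right': {'leaf': '.'}}
assert same_fringe(fringe(t1), fringe(t1))
assert not same_fringe(fringe(t1), fringe(t2))
```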

By the way, using ActorScript to illustrate your point only makes it more difficult for people to interpret your message. Doubly so because many characters in your posts don't render correctly (at least on either of my Macs).

### SameFringe? uses 2-way communication

SameFringe? uses 2-way communication because producers await requests for next elements and consumers request next elements.

### Two-way coroutine

In that case, you might say { print "hello"; print "world"; } is an example of two-way I/O because the program awaits confirmation that the user hasn't terminated the process, and the user decides whether to terminate the process. But that's as one-way as mainstream programs usually get.

Brandon Bloom is talking about the ability to continue the coroutine with a value passed in from the outside. To the coroutine, it looks like it's reading input.

### Generalized Yield

Exactly right.

The paper I linked discusses this subtlety in detail including various partial realizations of the idea in mainstream languages. In particular, JavaScript, Python, and Ruby all support some notion of input-from-yield.

In JavaScript (ES6), the generator's next method takes an optional input value.

In Python, you can use the generator.send method.
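A minimal example of generator.send: the value passed to send() becomes the result of the yield expression inside the generator, giving the two-way communication discussed above.

```python
def echoer():
    # Each yield both produces the accumulated list and receives the
    # next value from the caller via send().
    received = []
    while True:
        value = yield received
        received.append(value)

g = echoer()
next(g)                      # advance to the first yield
assert g.send("hello") == ["hello"]
assert g.send("world") == ["hello", "world"]
```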

In Ruby, the return value of a block is provided to yield:
irb(main):001:0> def f(); "yield returned: " + yield; end
=> nil
irb(main):002:0> f { || "foo" }
=> "yield returned: foo"

Ruby's approach is particularly worth studying because coroutine communication is often used in preference to parameter-passing for typically higher-order combinators. For example, Ruby's Array#map function is implemented in terms of two-way communication of coroutines.

### Ruby's yield is just a parameter

Ruby's approach is particularly worth studying because coroutine communication is often used in preference to parameter-passing for typically higher-order combinators. For example, Ruby's Array#map function is implemented in terms of two-way communication of coroutines.

As far as I understand it, Ruby's yield keyword is nothing but a slight syntactic alternative to writing my_yield.call and adding &my_yield to the parameter list.

If we tried to shoehorn coroutine examples into Ruby's yield, I think the coroutine's client would have to be written in continuation-passing style, at least until we turned to Ruby's first-class continuation support for help.

### You're right

I was misremembering. I haven't written Ruby in quite a while!

What I was remembering is that &block in the parameter list requires reification of the lambda, whereas just calling yield doesn't. Still interesting.

### "Hairy control structure" didn't work out in practice

Landin's J (called "hairy control structure" by Sussman and Steele) didn't work out in practice.

See discussion in Inconsistency Robustness.

### Non-sequitur

Once again, you've changed the subject. This time from the discussion of thunks being less expressive than generalized yield, to a discussion of how an old realization of an idea didn't work. Meanwhile, you've failed to recognize the many more recent realizations of this idea that have worked (Python, Ruby, JavaScript, C#). This is to say nothing of the recent formalization of the equivalences between yield, delimited continuations, and monads.

### Versions of "Yield" attempted to tame hairy control structure

Various versions of Yield attempted to tame "hairy control structure", but they still have many problems.

### double-*sigh*

Once again, I give up on attempting to communicate with you.

### re two-way coroutine

Brandon Bloom is talking about the ability to continue the coroutine with a value passed in from the outside. To the coroutine, it looks like it's reading input.

Request messages include parameters. Return messages include parameters.

Request messages include a "reply address" parameter; reply messages don't need one.

### To add two-way communication to the examples

Messages can have parameters, sure, but the ones we're talking about don't. Your "GET message" example involves a GET message with zero parameters (except the reply address). What would it look like to add a parameter to that?

Actually, I think the refactoring of Hewitt's examples would revolve around a pretty simple change: We would replace (Postpone ...) with an anonymous procedure ([x] → ...). But that would undermine the advice "don't use yield; use postpone":

• After this modification, the examples don't even use postpone.
• If the CPS wasn't apparent before, it is now. Why not add a language feature like yield to clean it up?

### Thunks don't implement "Postpone" (or "Future")

Thunks don't implement Postpone because they don't do at-most-once execution, and they don't implement Future because they don't do concurrency.

### ActorScript Equality

How is equality defined in ActorScript? The example at the top simply states:

PostponedFringe.[aTree] = PostponedFringe.[anotherTree]


Yet this is not a straightforward equality, as it is supposed to terminate early on the first difference to be efficient. Further, '=' is not simple to define on all datatypes. Somehow we must decide on the correct equality to use for each type, and although built-in types might have built-in equality, it seems fairly obvious that any user-defined type will need to define its own version of equality.

How does ActorScript handle overloading of equality for user defined types?

What does the definition of equality look like for lazy lists?

### There are many equivalence classes for Actors

There are many equivalence classes for Actors.

ActorScript has a built-in equivalence class for its primitive types called "=". The rest of them have to be programmed.

### Equality on lists

So is the builtin list equality lazy or eager (assuming lists are a builtin), or is FutureList a separate builtin with its own definition of equality?

### FutureList is a different type than List

FutureList is a different type than List; both are built-in.

The implementation of each type must obey axioms for the type.

### Implementation

If the implementation of each type must obey the axioms for that type (which I think is entirely reasonable), then how do I define the implementation? Can I actually write a concrete first-order model in ActorScript? For example, how do I define concrete realisations of Successor and Zero so that I can implement natural numbers?

### Actors are *not* first-order

Actors are not first-order, but it is certainly possible to implement ℕ:

Interface ℕ with
zero?[ ]↦Boolean,
successor[ℕ]↦ℕ▮


### Realisability

My understanding is that we cannot realise second-order things, but we can realise first-order things. So the above is a specification that is a single model of natural numbers up to isomorphism. However, to actually implement natural numbers we must pick a representation. For built-in types we might decide this when we write the compiler, so a natural number might be a string of 1 bits (succ) terminated with a 0 (zero). This is fine for built-ins, but if we build a user-defined model for something (that is not a simple composition of built-ins), how do we realise a first-order model that can be implemented from the second-order model?

### Natural implements ℕ

The following implements ℕ:

Actor Natural[n:Integer] extension ℕ using
zero?[ ] → n=0,
successor[ ] → Natural[n+1]▮


### FutureFringe can be higher performance than PostponedFringe

FutureFringe (below) can be higher performance than PostponedFringe (above) because producers can run ahead.


FutureFringe.[aTree:Tree]:FutureList◅String▻  ≡
aTree �
↑↑Leaf[aString] ⦂ [aString] ⍌
↑↑Fork[aLeft, aRight] ⦂ [⩛FutureFringe.[aLeft],  ⩛Future FutureFringe.[aRight]]▮