Call-by-what?

I know that this was discussed on LtU already, but as the "Why Types" discussion demonstrated, people can always come up with new ideas on old topics.

From a theoretical point of view, call-by-value and call-by-name are dual.

Does that mean that in practice PLs should support just one of them, and get "two-for-the-price-of-one"?

Or does usability demand supporting them both?

Is this decision similar to supporting just And-Not as opposed to the full set of (redundant) logical operations?

Does the decision depend on the type system, or is it fully orthogonal to it?


Call-by-what?

Does that mean that in practice PLs should support just one of them, and get "two-for-the-price-of-one"?

Or does usability demand supporting them both?

As I understand it, in practice many PLs use either call-by-name (strictly speaking call-by-need, as in Haskell) or call-by-value (ML, Scheme) as the base evaluation strategy, and additionally give some way to locally change its behaviour (e.g. delay/force in Scheme, or strictness annotations in Haskell).

Is this decision similar to supporting just And-Not as opposed to the full set of (redundant) logical operations?

If you give a programmer only And-Not, he will implement the full redundant set anyway.

Abstractions

If you give a programmer only And-Not, he will implement the full redundant set anyway.

Provided the language supports appropriate abstraction mechanisms. E.g., to implement a usable And with And-Not (nand), one would probably need either functions or macros.
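
For instance, given only nand as a primitive, something like this sketch in Scheme (untested; the starred names are mine, to avoid clashing with the built-ins):

    (define (nand a b) (not (and a b)))      ; stand-in for the given primitive

    (define (not* a)   (nand a a))
    (define (and* a b) (nand (nand a b) (nand a b)))
    (define (or* a b)  (nand (nand a a) (nand b b)))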

To implement CBN with CBV... Well, I am not sure what abstraction mechanism is required, but it looks like at least first-class continuations are needed. Or something of similar power. I probably need to revisit control-handling primitives. PS: by the way, your name sounds familiar. Are we acquainted? :)

What's needed

The Wadler paper certainly assumes the presence of a sort of continuation, but note that there is no assumption that continuations can be stored, which means that continuations can only be used within lexical scope.

Scheme certainly has the resources to implement lazy evaluation; we've known this since the classic LtU papers. But actually supporting the kind of programming Haskell users take for granted (i.e. the Bird-Meertens identities) is not trivial, and while the SRFI process has thrown up some nice work (SRFI-40 and SRFI-45), there's still some way to go. One thing is clear: powerful meta-programming facilities are essential to this kind of "cook it up in the libraries" approach.

And CBV with CBN?

Provided the language supports appropriate abstraction mechanisms. E.g., to implement a usable And with And-Not (nand), one would probably need either functions or macros.
It would be too severe to give a programmer only a minimal set of logical operations without the possibility of extending it. By the way, maybe it's better to have one uniform representation of such abstractions, which every programmer would otherwise implement himself (that's maybe not such a good idea for more complex language abstractions, where it's worth having different implementations for different purposes).
To implement CBN with CBV... Well, I am not sure what abstraction mechanism is required, but it looks like at least first-class continuations are needed. Or something of similar power.
It seems to me that for implementing CBN with CBV, first-class continuations will suffice, possibly supplemented with some technique for avoiding multiple evaluation of expressions (graph reduction?) for efficiency reasons.
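
Something like this sketch, say (untested Scheme; plain thunks give CBN, and memoizing them recovers the sharing that graph reduction would provide, which is essentially what delay/force do):

    ; by-name: pass computations as thunks, force them at each use
    (define (if-zero n then-thunk else-thunk)
      (if (= n 0) (then-thunk) (else-thunk)))

    (if-zero 0
             (lambda () 'fine)
             (lambda () (diverge)))   ; (diverge) stands for any looping call;
                                      ; it is never evaluated, so we get 'fine

    ; call-by-need: memoize so a thunk's body runs at most once
    (define (memoize thunk)
      (let ((done? #f) (value #f))
        (lambda ()
          (if (not done?)
              (begin (set! value (thunk))
                     (set! done? #t)))
          value)))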

What about implementing CBV with CBN? As I understand it, some way of forcing evaluation at a given point will be needed. Also, the possibility of non-terminating expressions arises.
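
And within the thunk encoding above, a "strict" application is exactly such a forcing point:

    ; force the argument once, up front (any non-termination surfaces right here),
    ; then pass the already-computed value on as a trivial thunk
    (define (apply-strict f arg-thunk)
      (let ((v (arg-thunk)))
        (f (lambda () v))))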

PS: by the way, your name sounds familiar. Are we acquainted? :)
You're sitting two floors above me :)

Scheme promises

It seems to me that for implementing CBN with CBV, first-class continuations will suffice, possibly supplemented with some technique for avoiding multiple evaluation of expressions (graph reduction?) for efficiency reasons.

Scheme's "promises" are more or less just closures with avoidance of multiple evaluation (usually implemented with a mutable flag associated with the promise). There's an introduction to them, demonstrating their use in creating infinite lists (as in Haskell), here.
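
For reference, the idiom looks roughly like this (a minimal sketch; the stream- names are mine, not SRFI-40's exact interface):

    ; a cons whose tail is a promise; the macro keeps 'tail' unevaluated
    (define-syntax stream-cons
      (syntax-rules ()
        ((_ head tail) (cons head (delay tail)))))

    (define (stream-car s) (car s))
    (define (stream-cdr s) (force (cdr s)))

    ; an infinite list of integers, as in Haskell
    (define (integers-from n)
      (stream-cons n (integers-from (+ n 1))))

    (stream-car (stream-cdr (stream-cdr (integers-from 0))))  ; => 2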

More values can be obtained as a result of CBV than CBN?

Of all promises I prefer transients :) Probably I am too fascinated by Mozart/SEAM VMs...

Let me regroup and redeploy: if "more programs terminate under call-by-name than call-by-value", then what is the dual statement? More values can be obtained as a result of CBV than CBN? :)

And does it make sense to build an analogy between CBN/CBV and the Sheffer stroke/dagger (nand/nor), along the lines that (in some logics) one can define or, and, not, and implies using just one of them? Or are CBN/CBV more like the underpowered and/or?

superiority vanishes

Filinski holds that CBV and CBN are dual with respect to flaws. There are programs that terminate under CBV but not under CBN.

In ML terms, for the expression

    let _ = raise k in ... loop ...

Standard ML will avoid the path that would prevent a lazy ML (even if it had exceptions or a similar facility) from terminating.

In the frameworks of Filinski and Wadler, where CBV and CBN are seen to be dual, there are continuations. So evaluating an expression strictly can cause a transfer of control that rescues the program from an infinite loop.
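
In Scheme terms (a hypothetical rendering, with call/cc standing in for the exception):

    (define (loop) (loop))

    (call-with-current-continuation
      (lambda (k)
        (let ((x (k 'escaped)))   ; strict: the binding is evaluated first,
          (loop))))               ; so control escapes; the loop is never entered
    ; => escaped

Under a by-name regime the unused binding would never be forced, and (loop) would run forever.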

Trying to think dually

What about implementing CBV with CBN? As I understand it, some way of forcing evaluation at a given point will be needed. Also, the possibility of non-terminating expressions arises.

My vision is that (intuitively) CBV goes from the resources at our disposal to the goal, while CBN starts with the goal and looks for resources (an ends/means duality). It is well known that CBV can fail to terminate by "not finding the goal". I feel it's quite possible for CBN not to terminate by "not finding the resources". I vaguely remember reading about lazy/eager products/coproducts, and how they influence termination under CBN/CBV. Went looking for the link...

[on edit: oh, but of course it was Declarative Continuations and Categorical Duality, cited in Wadler's paper as well. Got to re-read it... But first I need to understand Wadler's paper, it looks more promising :)]

You're sitting two floors above me :)
Now that's totally unexpected :) I have to google LtU for "Latvia" :)

Promise Keepers? :-)

Alexander Shumilov: It seems to me that for implementing CBN with CBV, first-class continuations will suffice, possibly supplemented with some technique for avoiding multiple evaluation of expressions (graph reduction?) for efficiency reasons.

Anton van Straaten: Scheme's "promises" are more or less just closures with avoidance of multiple evaluation (usually implemented with a mutable flag associated with the promise). There's an introduction to them, demonstrating their use in creating infinite lists (as in Haskell), here.

A couple of weeks ago, Tim Sweeney was kind enough to point me to this paper on the Cyclic Lambda Calculus. I suspect, but am not sure, that it contains an analogue of Scheme's "promise", modulo multiple evaluation. In fact, upon reflection, that's almost certainly such a weak claim as to not be worth making!

Call-by: Both

get "two-for-the-price-of-one"?

In other words, since having one is isomorphic to having the other, does that mean that a language that supports either one is as capable as another language that supports both?

No, it does not follow. Consider that sum types are the dual of product types. The duality means that a language with just sum types can be matched against a language with just product types; however, a language with both sum _and_ product types is superior.

Note that in conventional logic you can get by with And-Not, leaving out Or, since classically A or B = not(not A and not B). But it's intuitionistic logic that is analogous to types, and in that context the equivalence fails, so Or is necessary.

Wadler's duality paper shows that on a foundation of sums, products, and first-class continuations you can fabricate both by-name functions and by-value functions. In the paper, call-by-name and call-by-value are implemented on distinct (dual) reduction engines. This obscures the fact that call-by-name and call-by-value can coexist, as they do in Filinski's Symmetric Lambda Calculus.

Does the decision depend on the type system, or is it fully orthogonal to it?

Functions are at the heart of all languages. By-name functions have a different type from by-value functions, in the sense that they are constructed out of sums, products, and continuations in different ways. The choice of conflating the types or distinguishing them would be a significant and interesting aspect of the language.
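
In Scheme terms, a rough sketch of the distinction: a by-value function consumes an evaluated value, a by-name function consumes a computation, and the two plainly have different shapes:

    (define (twice-v x)  (+ x x))           ; by-value: x is already a number
    (define (twice-n xt) (+ (xt) (xt)))     ; by-name: xt is a thunk

    (twice-v (* 3 4))              ; the product is computed once, at the call
    (twice-n (lambda () (* 3 4)))  ; the product is computed at each use

A language can expose that difference in its types, or quietly convert between the two.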

Call-by-send

Recently I became interested in the pi calculus and its friends (like the join calculus and the explicit fusion calculus). Encodings of both call-by-value and call-by-name in pi are well known. Algebraic types (complete with sums and products) can be easily encoded as well (a.k.a. Church encoding). First-class continuations correspond to pi channels (names, ports, locations). Types of names can be inferred statically. Does this mean that some variant of pi plus a powerful macro system (for expressing all these encodings) will be the next uber-language (as Lisp was/is)?
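
To make the "continuations are channels" point concrete, here's a sketch with made-up channel primitives (spawn, make-channel, send! and recv! are assumptions, not any particular Scheme's API): a "function" is a server process, and the reply channel plays exactly the role of a continuation.

    ; hypothetical primitives: (spawn thunk) (make-channel) (send! ch v) (recv! ch)
    (define double-ch (make-channel))

    (spawn (lambda ()                     ; the "function" is a serving process
             (let loop ()
               (let* ((req   (recv! double-ch))
                      (arg   (car req))   ; req = (argument . reply-channel)
                      (reply (cdr req)))
                 (send! reply (* 2 arg))
                 (loop)))))

    (define (call-double x)               ; a "call" sends arg + fresh reply channel
      (let ((reply (make-channel)))
        (send! double-ch (cons x reply))
        (recv! reply)))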

As a bonus, you get very native concurrency/distribution support. And the core calculus is so small it could be supported by hardware. A pi machine, anyone? Matt? :-)

Pi, Call-by-push-value

The pi-calculus is essentially in CPS, so the fact that it has first-class continuations and can encode things that can be encoded with continuations is not very impressive. In other words, having a powerful macro system that macro-expands into pi is not very different from having a powerful macro system that macro-expands into any arbitrary intermediate language.

Another calculus that may be worth looking at is call-by-push-value.

Exactly

But you forgot the concurrency bonus. Or is it imaginary?

Thanks for the "call-by-push-value" pointer, it's really interesting stuff.

Actually I ignored it.


If we look at Lisps and their relationship to their macro systems, we see that the base language is powerful, flexible, and usable on its own. We do not see a CPS intermediate language with macros layered over it to make it (more) usable. Such an approach would have benefits: call/cc would be trivial, call-by-name or call-by-value would just be different parameter passing conventions, etc. However, it also has drawbacks: CPS is fairly low-level and overdetailed (e.g. evaluation order is strictly specified). This constrains implementors and makes it more difficult to perform analyses and generate efficient code.
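
To make the call/cc remark concrete, a tiny CPS sketch (untested Scheme; the &-suffixed names are mine):

    ; every CPS function takes an explicit continuation k
    (define (+& x y k) (k (+ x y)))
    (define (*& x y k) (k (* x y)))

    ; direct-style (+ 1 (* 2 3)) becomes:
    (*& 2 3 (lambda (v) (+& 1 v display)))   ; prints 7

    ; call/cc is trivial: hand k to f as a value; invoking that value
    ; discards whatever the then-current continuation k2 happens to be
    (define (call/cc& f k)
      (f (lambda (v k2) (k v)) k))

Note how the CPS version also pins down the evaluation order (* before +) that direct style leaves open.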

The pi-calculus has the same benefits and problems, but the problems are more dramatic. It's non-deterministic, implies a concurrent model, and further implies a particular model of concurrency. There are (other) high-level concurrency primitives that may or may not map nicely to the pi-calculus and almost certainly can be optimized better than their pi-calculus encodings in some situations (it can't be worse). To give an immediate example: much as most Scheme code is not in CPS, though you in essence drop into the CPS sub-language with call/cc, most concurrent code is a collection of sequential processes. It is unlikely an implementation will be able to recover the sequential parts in general. Also, the choice limits the applicability of the language; I can always add concurrency primitives to a language, but I can't take them away (without creating a new language).

The concurrency "bonus" is mostly irrelevant, hence I ignored it. The pi-calculus doesn't have a bonus compared to concurrent high-level languages; it has the same relationship to (some of) them as, say, CPS does to Scheme. The question is simply whether it's a good idea to design a language this way. I'm not aware of any successful general-purpose language that has taken this approach (certainly, though, implementations have). If the answer is yes, it also raises the question, "Why not be more extreme?" While the most extreme approach along these lines (powerful macros over assembly) may be a bit too extreme, one could certainly go lower. Then there's also the issue of whether it wouldn't just make more sense to compile to the intermediate language.

Ever since Milner introduced the pi-calculus...

The pi-calculus doesn't have a bonus compared to concurrent high-level languages; it has the same relationship to (some of) them as, say, CPS does to Scheme.

This reminds me of the introduction to HOSC 11.2:

Ever since Milner introduced the pi-calculus, it has been part of the folklore in the continuation community that translations of the lambda-calculus into the pi-calculus look like CPS transformations. In his article "The pi-calculus in direct style," Gerard Boudol formalizes this folklore. He establishes a bridge between CPS and the pi-calculus, and presents the "blue calculus," a direct-style model for higher-order concurrency with the same expressive power as the pi-calculus but with a better notational convenience, similarly to the usual lambda-calculus in direct style compared to CPS.
Unfortunately, The pi-Calculus in Direct Style didn't raise any interest on LtU.

Ack!

Andris Birkmanis: Unfortunately, The pi-Calculus in Direct Style didn't raise any interest on LtU.

On the contrary, Andris! I'm keenly interested—merely incompetent to comment. Thanks so much for the food for thought!

Which reminds me that this weekend I need to finish my ATITAPL review...

Quick review - corrections welcome

I'm keenly interested—merely incompetent to comment.

My competence is limited to having read about a dozen lambda papers, and about half as many pi papers. So I am by no means qualified to review, caveat lector :-)
[on edit: I moved the review to the paper's thread]

Linear computer programs

There's a nice parallel with methods used in numerical algorithms. Consider computer programs that are linear in their arguments ('linear' as in linear algebra). Applying a function is just matrix multiplication. Given a matrix multiplication, eg. ABC, associativity means we can evaluate this as (AB)C or A(BC). The former can be thought of as call-by-name and the latter call-by-value (or vice versa depending on conventions). These are dual to each other because duality, in this case, means transpose and tr(AB)=tr(B)tr(A). This looks trivial but actually the dual linear program can be non-obvious and useful. A nice example of dual linear programs are the two algorithms for automatic differentiation: the forward and adjoint methods. (Eg. see http://users.info.unicaen.fr/~karczma/arpap/revpearl.pdf) Other examples are in image processing where linear operations are often perfomed in interesting non-trivial ways. (More examples: http://sepwww.stanford.edu/sep/prof/gee/toc_html/index.html)