Non-Applicative Functional Languages

I am considering writing a paper about non-applicative functional languages, with the following abstract:

Non-applicative Functional Languages

Most of today's functional languages are examples of applicative languages. In other words,
the operation of function application is generally implied between two consecutive terms.
This paper explores what it means for a language to be non-applicative, and what the implications
are in terms of basic operations such as the Y combinator.
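
To make the distinction concrete, here is a rough Haskell sketch (my own illustration, not from any existing paper) of the same computation written applicatively and as a bare composition of functions:

-- Applicative style: juxtaposition denotes function application,
-- and the argument is named.
applicative :: Int -> Int
applicative x = negate (succ (abs x))

-- Compositional ("non-applicative") style: the program is built
-- entirely by composing functions; no argument is ever mentioned.
compositional :: Int -> Int
compositional = negate . succ . abs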

Is this a topic that has already been explored? I only found one reference to "non-applicative languages" on Google scholar. Is anyone else interested in this topic? Any suggestions on how to make the abstract more appealing?

Thanks in advance.


John Cowan

Well, concatenative languages (Forth, Joy, Factor, Cat, and others) are definitely non-applicative.

Concatenative == Non-Applicative

My work on Cat is what inspired me to consider writing such an article. I haven't found any papers in the scholarly literature that formally discuss the differences between applicative and non-applicative languages. Do you think there would be some benefit in a formal comparison between the two classes of languages?

Isn't this mostly syntax?

Writing "foo bar baz" in ML-like languages assumes left-to-right application, but you could simply put in parens "foo (bar baz)" for the effect of composition. In APL "foo bar baz ..." would be read as composition. Scheme requires the parens in either case, but it's equally easy to write (foo (bar (baz ...))) or (((foo bar) baz) ...).

... and have a look at the hooks, forks, and combinators of J.

Josh

Definitions

I can't tell whether the syntactic issue is what cdiggins is after, but the definition of "applicative language" given in the topic is misleading. The more usual definition is something like this:

A programming language in which functions are repeatedly applied to the results of other functions and, in its pure form, there are no statements, only expressions without side effects.

(from here.) Of course, that looks a lot like a definition of FP, and Foldoc concurs.

Backus

Thanks!

Thanks! I have some more studying to do!

My reading of that article is that...

Functional programming = Applicative programming
Function-level programming = Point-free programming

Is that a correct reading?

I always understood applicative programming as a synonym for functional programming so I was very confused by the original question.

Roughly, yes.

The "algebra" of programs is also a very strong aspect of Backus' view of FP. Concatenative languages tend to be extremely easy to formally manipulate. Part of the allure/reason for point-free style.

The "algebra" of programs is

The "algebra" of programs is also a very strong aspect of Backus' view of FP. Concatenative languages tend to be extremely easy to formally manipulate.

Interesting! My problem with point-free style is that its comprehensibility doesn't seem to scale. That is, saying "avg = sum / count" is fine, but once you're doing anything complicated, I find it appallingly confusing. For example, the APL family impresses me greatly in terms of that "easy to manipulate" quality, but it looks like line noise to me. I am not referring to the character set issue; the same thing comes up in J and K.
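
To illustrate with the avg example (a Haskell sketch of mine; liftA2 here uses the function applicative):

import Control.Applicative (liftA2)

-- Pointwise: the argument has a name, and the definition reads naturally.
avg :: [Double] -> Double
avg xs = sum xs / fromIntegral (length xs)

-- Point-free: still fine at this scale, but every added combinator
-- is another layer the reader must mentally unwind.
avg' :: [Double] -> Double
avg' = liftA2 (/) sum (fromIntegral . length)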

I find that names have a strong documentary value. I wonder: is it possible to achieve the same ease of manipulation, at the cost of some conciseness, by retaining names?

Any comments or suggestions for further reading are appreciated.

line noise

There is that. Consider this line from the J entry to the 1999 ICFP contest:

dedge=: >: @: (+/~) @: (i. <. i.@-)

and their explanation, "The definition of dedge may be read as increment (>:) the addition table (+/~) of the minimum (<.) of the lists of increasing (i.) and decreasing (i.@-) non-negative integers."

Suppose instead it had been written

WeightsTable = 1+ (table +) min(ints, rev ints)

(changing the implicit operation to composition and naming a few of the other symbols)? I think that it wouldn't take all that much to make it as readable as the equivalent code in any other language.
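
Or, transliterated into Haskell (my reading of the J, so treat it as an approximation):

-- dedge n: increment the addition table of the elementwise minimum
-- of [0..n-1] and its reverse.
dedge :: Int -> [[Int]]
dedge n = [ [ 1 + w i + w j | j <- [0 .. n-1] ] | i <- [0 .. n-1] ]
  where
    w k = min k (n - 1 - k)  -- min of the increasing and decreasing lists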

Josh

The algebra of programming

Regarding the point-free style, you should look at Richard Bird and Oege de Moor's book The Algebra of Programming. That's based around relations rather than functions, and the inherent non-determinism discourages the use of variable names. Oege and I wrote a paper Pointwise Relational Programming (for AMAST in 2000) about trying to reintroduce the names, which are valuable both for documentation and for plumbing. It's tricky, though, because you lose beta-equivalence: (let x = 1 [] 2 in x+x) is not the same as ((1 [] 2) + (1 [] 2)), where "[]" denotes choice. Pointless programming is less of a problem functionally; see for example A Pointless Derivation of Radixsort, in which dispensing with variables simplifies the calculations considerably.
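
A quick way to see the failure of beta-equivalence is to model the choice operator "[]" with Haskell's list monad (my encoding, not the paper's):

-- Model the nondeterministic choice (1 [] 2) as the list [1, 2].

-- "let x = 1 [] 2 in x + x": one choice, used twice.
letBound :: [Int]
letBound = do { x <- [1, 2]; return (x + x) }                  -- [2,4]

-- "(1 [] 2) + (1 [] 2)": two independent choices.
substituted :: [Int]
substituted = do { x <- [1, 2]; y <- [1, 2]; return (x + y) }  -- [2,3,3,4]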

End of self-plug.

re the radix sort paper: see

re the radix sort paper: see here (and the linked-to discussion).

Interesting! My problem with

Interesting! My problem with point-free style is that its comprehensibility doesn't seem to scale.

I agree. I believe point-free languages are most useful as intermediate languages.

The idea behind Cat (a point-free, stack-based, non-applicative, compositional language) was for it to serve as a back-end for imperative or functional languages, one that could be easily extended and understood by human programmers.

What sets Cat apart from other intermediate languages (apart from a readable syntax) is that the fundamental operations are first class values. They can be quoted and passed to each other, making it a full-fledged functional language (according to some definitions).

I find that names have a strong documentary value. I wonder: is it possible to achieve the same ease of manipulation, at the cost of some conciseness, by retaining names?

Yes, but at some point you will probably want to discard names if you want to express some kind of rewriting grammar (e.g. for optimization). The problem with retaining names all the way down to the lowest representation is that any optimization has to be expressed in terms of an abstract tree manipulation. This can be done, but it is hard to express, and becomes tightly coupled to the abstract representation of the language. In other words, it is bound to a specific implementation of a specific language. Going to a point-free form before optimization or transformation means that rewriting rules can be performed on the concrete syntax of the point-free language.
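
As a sketch of what I mean (a toy encoding of mine, not Cat's actual optimizer): once a program is a flat list of words, rewrite rules are plain list pattern matches.

-- A toy concatenative program is a flat list of words.
-- (Named Word' to avoid clashing with Prelude's Word type.)
data Word' = Dup | Pop | Swap | Lit Int | Add
  deriving (Eq, Show)

-- Peephole rewriting needs no binder bookkeeping or tree traversal;
-- this is a single pass (a real optimizer would iterate to a fixed point).
simplify :: [Word'] -> [Word']
simplify (Dup : Pop : rest)           = simplify rest                 -- dup pop == id
simplify (Swap : Swap : rest)         = simplify rest                 -- swap swap == id
simplify (Lit a : Lit b : Add : rest) = simplify (Lit (a + b) : rest) -- constant folding
simplify (w : rest)                   = w : simplify rest
simplify []                           = []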

Someone please jump in if I am making too many unsubstantiated claims, or if you feel I am mistaken.

jumping in (not pushing or popping)

What sets Cat apart from other intermediate languages (apart from a readable syntax) is that the fundamental operations are first class values. They can be quoted and passed to each other, making it a full-fledged functional language (according to some definitions).

Both CPS and ANF intermediate forms are full-fledged functional languages [edit: link], with readable syntax (CPS is readable with practice and/or some syntactic sugar). At a slightly lower level, bytecode-based intermediate formats usually have readable assembler-like syntaxes, and such formats for functional languages are almost always higher-order (see e.g. the MacScheme machine). It would help to have some more specific examples of what you're comparing Cat to.

Yes, but at some point you will probably want to discard names if you want to express some kind of rewriting grammar (e.g. for optimization).

Why? See our previous discussion.

The problem with retaining names all the way down to the lowest representation is that any optimization has to be expressed in terms of an abstract tree manipulation. This can be done, but it is hard to express,

More initial implementation effort is involved in performing transformations on trees. So the question is really whether that extra effort, which has to be invested by the language implementor, is worth it. I think there's quite a bit of evidence indicating that it is.

and becomes tightly coupled to the abstract representation of the language. In other words, it is bound to a specific implementation of a specific language.

This doesn't make sense to me. The transformations you're describing are as bound to the point-free language as any transformations are on any language.

Going to a point-free form before optimization or transformation means that rewriting rules can be performed on the concrete syntax of the point-free language.

I don't think that's the issue. "Concrete syntax" would include characters like "[" and "]" (in Joy/Cat), which delimit what become quotations in the abstract syntax. I doubt there's any benefit to having transformations operate at that level -- they can just as well be performed on abstract syntax. It's just that the abstract syntax of stack languages has a simpler, more linear structure.
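
To be concrete (a minimal Haskell sketch of such an abstract syntax, assuming a Joy/Cat-like language):

-- The entire abstract syntax fits in two constructors: a term is a
-- primitive word or a quotation, and a program is a list of terms.
data Term = Prim String | Quotation [Term]
type Program = [Term]

-- The concrete "[dup *] map" becomes:
example :: Program
example = [Quotation [Prim "dup", Prim "*"], Prim "map"]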

Someone please jump in if I am making too many unsubstantiated claims, or if you feel I am mistaken.

See above. :) I think Qrczak's comment from the earlier discussion also needs to be addressed: stack-based languages don't seem to be a very good semantic match for imperative or functional source languages, or for real target machines, or for traditional VMs, so any benefit in ease of performing transformations in a stack-based language seems likely to be moot.

If you're saying that Cat would make a good VM for imperative or functional languages, the best way to demonstrate that would be to implement a simple language on top of Cat. Then the issue would become how the VM compares against VMs that have been designed around other criteria, which usually include efficient execution of the source language.

BTW, if I seem negative about this, I should mention that I do find the relationship between the lambda-based models and linear models interesting. However, my interest is more at the theoretical level in this case, because I'm not seeing the specific practical advantages you're claiming.

Both CPS and ANF

Both CPS and ANF intermediate forms are full-fledged functional languages [edit: link], with readable syntax (CPS is readable with practice and/or some syntactic sugar). At a slightly lower level, bytecode-based intermediate formats usually have readable assembler-like syntaxes, and such formats for functional languages are almost always higher-order (see e.g. the MacScheme machine). It would help to have some more specific examples of what you're comparing Cat to.

Many of the most popular intermediate languages in use today, e.g. JVML, MSIL, x86 assembly, and C, aren't higher-order. I'll take back what I said about readable syntax though, since readability is such a subjective idea.

More initial implementation effort is involved in performing transformations on trees. So the question is really whether that extra effort, which has to be invested by the language implementor, is worth it. I think there's quite a bit of evidence indicating that it is.

You are saying that there is quite a bit of evidence to suggest that transformations on abstract trees are somehow better than transformations on a flat format? I am unfamiliar with such evidence.

This doesn't make sense to me. The transformations you're describing are as bound to the point-free language as any transformations are on any language.

Sure, but my goal (which I am still a ways from achieving) is to find a common low-level language (e.g. Cat) to which other languages can be easily converted. I don't see why manipulating a tree with names could be advantageous over manipulating a point-free format. The rewriting algebra for compositional languages such as Joy and Cat is trivial ( http://www.latrobe.edu.au/philosophy/phimvt/joy/j04alg.html ) and very general.

My thinking is that one could express a generic term rewriting optimization system using a similar grammar, and then apply it to languages X, Y, and Z. The requirement is that those languages first be converted to a compositional point-free form. This alone seems to be a fairly strong argument for the usefulness of the compositional point-free form itself.

... stack-based languages don't seem to be a very good semantic match for imperative or functional source languages, or for real target machines, or for traditional VMs, ...

I don't understand. MSIL, JVML, x86 assembly, C, etc. are all examples of stack-based languages. What do you mean that they aren't good semantic matches for those various sources?

I should mention that I do find the relationship between the lambda-based models and linear models interesting

Have you had a chance to look at my most recent version of the Cat paper? I revised it again yesterday, and it is nearly submission-ready (I'm planning on submitting to ICFP). I'd be very interested in your comments.

You are saying that there

You are saying that there is quite a bit of evidence to suggest that transformations on abstract trees are somehow better than transformations on a flat format? I am unfamiliar with such evidence.

I'm saying that there's a great deal of evidence of the power of using lambda-based intermediate languages, particularly because of the transformations that they support. The SSA intermediate form used in compiling imperative languages also relies on the same properties.

The people who did this work were certainly aware of stack-based alternatives. It's not uncommon for virtual machines to be derived from such intermediate languages, and such machines are often at least partly stack-based. High-level transformations in such cases are done at the level of the lambda language, not at the VM level. The reason for that is that the intermediate language is designed primarily to support transformation and analysis, without irrelevant semantic artifacts, and the VM is designed to support important runtime properties, including efficient execution.
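
For readers unfamiliar with these forms, here's a minimal sketch of an ANF-style IL (a toy grammar of my own, not any particular compiler's): every intermediate value is named, so data flow can be read off directly.

-- Arguments are always trivial (variables or constants); every
-- compound computation is let-bound to a fresh name.
data Atom = Var String | Const Int
data Expr
  = Ret Atom
  | Let String String [Atom] Expr   -- let x = f(a1, ..., an) in e

-- "f (g x) (h y)" in ANF: evaluation order and all intermediate
-- values are explicit and named.
example :: Expr
example =
  Let "a" "g" [Var "x"] $
  Let "b" "h" [Var "y"] $
  Let "r" "f" [Var "a", Var "b"] $
  Ret (Var "r")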

A language like Cat combines some properties of a good high-level intermediate language (supporting high-level transformations), as well as some properties of a VM. This could potentially be a good engineering compromise, but one of my concerns about it is that high-level ILs and VMs have competing constraints. Using the same language for both purposes will force you to make more compromises than if you separated the two.

If you do separate them, then you're free to design an ideal IL and an ideal VM for your purposes. I definitely don't see a Cat-like language as being an ideal IL for traditional functional languages, and it probably isn't ideal for imperative languages either (more on this below). I'm sure it can be made into a good VM, although I suspect that you'll want to add optimizations to support source language characteristics, similar to those found in the JVM or MSIL, moving away from the pure stack model and towards more traditional VMs.

Sure, but my goal (which I am still a ways from achieving) is to find a common low-level language (e.g. Cat) to which other languages can be easily converted.
[...]
My thinking is that one could express a generic term rewriting optimization system using a similar grammar, and then apply it to languages X, Y, and Z. The requirement is that those languages first be converted to a compositional point-free form. This alone seems to be a fairly strong argument for the usefulness of the compositional point-free form itself.

It's at least an equally strong argument for CPS or ANF-based intermediate languages, which make excellent candidates for this purpose. See e.g. Realistic Compilation by Program Transformation, which describes a system that compiles BASIC, Pascal, and Scheme, through an intermediate CPS language, to native code. VMs are easily derived from such ILs.

PLT Scheme has implementations of Java and Python which compile to either Scheme or their core lambda language (not sure which), and their IDE takes advantage of the common intermediate representation for visualizing program structure during development and debugging. In that case, the presence of local names in the IL is critical.

But the problem with a common low-level language is not so much finding a common representation for the basic computational features of multiple languages -- I'd say that's pretty much a solved problem, with various credible solutions. The real problem is dealing with differences in things like data types, object systems, etc. That's one of the biggest reasons that the JVM and CLR don't make such great general-purpose language targets, even though the CLR was specifically designed for that purpose. The papers on C-- cover some similar issues.

I don't see why manipulating a tree with names could be advantageous over manipulating a point-free format. The rewriting algebra for compositional languages such as Joy and Cat is trivial ( http://www.latrobe.edu.au/philosophy/phimvt/joy/j04alg.html ) and very general.

The comment of Qrczak's which I linked to earlier gets into this. Any stack-related artifacts in the IL will be detrimental to suitability as a language for analysis and transformation.

I don't understand. MSIL, JVML, x86 assembly, C, etc. are all examples of stack-based languages. What do you mean that they aren't good semantic matches for those various sources?

I should have written "concatenative" (substitute equivalent term to taste), not stack-based. MSIL and JVML both have all sorts of optimizations for the source languages they're designed to support, e.g. to do with local variable stack frames, and as you know are certainly not concatenative. x86 assembly isn't a good semantic match for anything. :) More seriously, x86 isn't relevant, because no-one uses it by choice as an intermediate format, and besides it's also heavily register-based. C is about as stack-based as the lambda calculus, so clearly I should have been more specific.

Have you had a chance to look at my most recent version of the Cat paper? I revised it again yesterday, and it is nearly submission-ready (I'm planning on submitting to ICFP). I'd be very interested in your comments.

I looked at the first draft which you posted about here. That seemed mostly focused on the type system, which doesn't interest me as much. If that's changed, I'll take another look.

Food for thought

Thanks for sharing your ideas and knowledge, Anton.

I looked at the first draft which you posted about here. That seemed mostly focused on the type system, which doesn't interest me as much. If that's changed, I'll take another look.

I now describe the operational semantics in the paper, no longer discuss the S-K calculus, and cover other aspects of the language.

Interpreting the question

Re the original question, I think it intended to refer to the issues mentioned in the Joy FAQ, of applicative vs. "compositional" languages. Here's a relevant sentence:

Any functional language which completely replaces application, by explicit or implicit "@", by composition, with explicit or implicit "." or ";", might be called a compositional functional language.

"Composition" here refers to the idea that all program terms in Joy denote functions from stacks to stacks, and a program is simply a composition of such functions.

So the comparison is between traditional applicative languages in which programs consist of "functions repeatedly applied to the results of other functions" and compositional languages in which programs consist entirely of functions composed with other functions.
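
That reading is easy to model directly (a Haskell sketch of mine, with an integer-only stack for brevity):

-- Every word denotes a function from stacks to stacks; a program is
-- the composition of its words. Note that with (.) the first word of
-- the program text is the rightmost function in the composition.
type Stack = [Int]

push :: Int -> Stack -> Stack
push = (:)

dup, add :: Stack -> Stack
dup (x : s)     = x : x : s
dup _           = error "stack underflow"
add (x : y : s) = (x + y) : s
add _           = error "stack underflow"

-- The concrete program "2 3 add dup":
program :: Stack -> Stack
program = dup . add . push 3 . push 2   -- program [] == [5,5]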

Can Programming Be Liberated from the von Neumann Style?

Not listed on that Wikipedia page, but very on-topic: Can Programming Be Liberated from the von Neumann Style?