What are the properties of "Functional Programming Languages"?

1 - Let's first standardize the definitions of dynamically-typed, statically-typed, weakly-typed and strongly-typed. We have two dimensions here:
a) dynamically-typed vs. statically-typed
and
b) weakly-typed vs. strongly-typed

strongly-typed: every value has a type and cannot be implicitly converted to another type.
weakly-typed: every value has a type but may be implicitly converted to another type.
statically-typed: every value has a type at compile time.
dynamically-typed: values do not have a type at compile time; types are only known at run time.

Haskell, C#, Java are statically-typed and strongly-typed.
Ruby, Python are dynamically-typed and strongly-typed.
C is statically-typed and weakly-typed.
PHP is dynamically-typed and weakly-typed.
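To illustrate the quadrant Haskell, C# and Java sit in, here is a minimal Haskell sketch (the names are made up for the example): every expression has a compile-time type, and conversions are never implicit.

    double :: Int -> Int          -- statically typed: checked at compile time
    double n = n * 2

    main :: IO ()
    main = do
      -- print (double "2")       -- rejected at compile time: no implicit String -> Int
      print (double (read "2"))   -- strongly typed: the conversion must be requested explicitly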

(Unfortunately, even the MIT Scheme documentation (PDF) uses these concepts in a confused way.)

(For now, let's set aside things like linearly-typed languages.)

2 - Now, within each of the groups mentioned above, which features must a language implement to be considered a functional one?

For example JavaScript and Ruby are dynamically-typed. So things like pattern-matching are meaningless in their scope (or have a very different meaning).

Is "purity" really needed in a language to be considered as functional programming? Personally I do not think so. Because I think functional programming is about higher-order composability. For example structural-composability gives you the power to make more pluggable codes.

So again the question:
What are the properties of "Functional Programming Languages"?


Link or it didn't happen

Questions like this don't have definitive answers, because there are many different kinds of thing which qualify as functional programming, in different contexts. If you have a specific meaning in mind, then qualify the term accordingly, e.g. using "pure", "typed", "higher-order", etc.

Someone suggested not long ago that a good indicator of a useful forum topic here is that it should contain at least one external link. Rather than recapitulate discussions that have occurred elsewhere many times (mostly outside of LtU), why not search out some of those discussions and post links to some conclusions here?

Purity

Thanks for replying.

Maybe you are right that I should focus on some specific aspect of the question, so I will:
My main concern is purity. I want to know what exactly the value of purity is "in action". I've read a lot of mathematical texts about it. But look at Ruby, for example: it can do real things in daily development life, which is hard to see in a pure context like Haskell. I can see what Ruby is doing with Rails, or what Erlang has to offer (for a specific kind of task).
But in none of those texts about purity have I seen what or where the value is. In this case the "why" is still very abstract and mathematical, I think.

(If you think this discussion is not useful, delete it. Maybe I am still too much of a newbie, or I came to the wrong place. After all, asking is the joy of life!
Cheers!)

Why not?

We recently had a longish thread On the Importance of Purity. Perhaps the blog post linked there, or the resulting discussion, will help.

As far as the usefulness of the discussion, my own opinion is that you will learn much more, much faster by actually trying to learn a pure language like Haskell, than by asking questions about purity from the "outside".

With some experience, you might find that the question that you ask about purity is "why not"? Of course, there are valid answers on either side, but with some experience you'll be better equipped both to ask and answer the questions.

Types have little to do with it.

Features like the choice of type system are irrelevant to identifying functional programming languages. The minimum requirements for a programming language to be considered functional are explicit support for lambda abstraction (i.e. anonymous functions with variable capture) and the availability of all functions (anonymous or not) as first-class values.

  • Haskell is a functional language because it has lambda and first-class functions.
  • C# and Java are not functional languages because they have neither lambda nor entirely first-class functions (although C# has delegates).
  • Python could reasonably be considered a functional language since it has first-class functions and a (slightly crippled) lambda syntax.
  • Ruby could also possibly be considered a functional language since it has both explicit support for lambda abstraction and a sort of first-class functions, although its function objects are not transparently callable as functions.
  • C has first-class functions, but not lambda, so it is not a functional language.
  • PHP has a lambda-like facility (create_function) based on string evaluation, but it lacks variable capture and its functions are not really first-class values (PHP functions are accessible only by name; create_function generates functions with unique names rather than truly anonymous functions), so it falls short of being a functional language.
  • MIT Scheme is a functional programming language because it has first-class functions and explicit support for lambda.
  • JavaScript is a functional programming language because it has a lambda syntax and its functions are directly available as values.
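To make the two criteria concrete, here is a minimal Haskell sketch (the names twice and addN are only illustrative): a lambda with variable capture, and functions passed around and returned like any other value.

    twice :: (a -> a) -> a -> a        -- a function received as an ordinary argument
    twice f x = f (f x)

    addN :: Int -> Int -> Int
    addN n = \x -> x + n               -- a lambda capturing the variable n

    main :: IO ()
    main = do
      print (twice (\x -> x * 3) 7)    -- anonymous function passed directly: 63
      print (map (addN 10) [1, 2, 3])  -- a returned function used as a value: [11,12,13]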

C# has perfectly complete

C# has perfectly complete support for "lambda abstraction" and first-class functions. It has had this since at least C# 2.0, and C# 3.0 made it even nicer. Despite this, I do not consider C# to be a functional language. Furthermore, there are first-order languages that I would consider to be functional, such as Sisal and Single-Assignment C.

Hm, you're right -- I'd

Hm, you're right -- I'd underappreciated C# delegates since I hadn't seriously used C#. SA-C is also a good counterexample since it is widely billed as a functional language.

That being the case, what are your criteria for a functional language? It's difficult for me to think of a set of criteria which simultaneously admit languages like SA-C and SML while excluding languages like C# or even ISO C.

*EDIT*: Well, actually I can think of a pretty good discriminator: the language is designed to facilitate programming with side-effect-free functions.

How about...

How about Imperative vs. Declarative, with the latter being functional?

Obviously, in addition to the above criteria.

It seems like an avoidance

It seems like an avoidance of side-effects implies a declarative style. Can you think of a counterexample?

Declarative

Functional programming is one form of declarative programming. But declarative also encompasses the logic languages like Prolog and Oz.

Re: Declarative

Indeed, however, the ideas of first class functions and lambda expressions distinguish between Functional Programming and other forms of Declarative Programming.

Together, the three components define a functional language, albeit a pure one. (Arguably, Scheme and other Lisps would not be considered functional, as they aren't fully declarative.)

It's already been pointed

It's already been pointed out that there are first-order functional languages which by definition lack first-class functions (and therefore lambda). Perhaps the distinguishing feature (versus other declarative approaches) is evaluation as lambda-calculus-style function application?

Lambdas are insufficiently necessary

I take it you don't consider Joy to be functional? I don't recall C having first class functions, either - function pointers are not the same thing. I don't consider "first-order functional language" to be an oxymoron, either.

That's a good point too. An

That's a good point too. An even more painful counterexample than Joy might be SKI calculus (I might as well have ruled out lambda calculus). Obviously my post wasn't very carefully thought out.

For my own benefit, are C functions not first-class because they can't be created dynamically?

They're not first class (or

They're not first class (or more fully, "first class values") because they're not values.

Functions versus function pointers

That is, functions are distinct from function pointers (which do happen to be values).

I was going to disagree, but in the course of writing my reply I realized that the relationship is almost exactly the same as that between arrays and regular pointers. I don't think anyone could seriously argue that C has first-class arrays.

Thanks for your patience.

so whats the difference

so what's the difference between this and ... say, reified continuations in Scheme?

If you accept that continuations in Scheme are first class, then you would (IMHO) have a rather hard time arguing that functions in C are not first class. (Otherwise, where is the difference between call/cc reifying continuations in Scheme and the address operator (&) reifying functions in C?)

PS: and yes, C has first class arrays ... they are just not very safe to use ;)

PS PS: one could even argue that C has closures ... every function closing over the global environment ;)

For something to qualify as first-class

...I'd say it must meet two requirements:

1. You must be able to create/define it anyplace, without restrictions (just like an integer value).(*)

2. You must be able to pass it from the point of creation to any other place and use it there without restriction (just like an integer value).

From this it follows that neither functions nor arrays are first-class in C, because they violate both these requirements. Procedures in Pascal aren't because they violate the second (cannot be passed out of scope). Objects of inner classes in Java aren't, because they violate the first (may only refer to final variables of enclosing scope).

(*) This leaves some headroom wrt syntax, e.g. having only named declarations instead of anonymous expression forms is OK, as long as the language provides some let-like expression form to embed them.

So we disagree about your

So we disagree about your first point ...

First class in my opinion: can be *treated* like any other value ... stored in variables, passed to and returned from functions.

Do you have any reference which states the 'can be created anyplace' condition, as this is the first time I have heard of it.

Also, some questions to understand what you mean by create/define: how many ones (1) do you normally create during the lifetime of your program? Are literals in your language created, or are they just *there*? Are you defining variables or values in your language?

# first class procedures,

  • first class procedures, which are normal data objects in the language,
  • higher order procedures, which can take procedures as arguments and return them as values, and
  • anonymous procedures, which can be created and referred to via pointers, without giving them names.

References

Do you have any reference which states the 'can be created anyplace' condition, as this is the first time I have heard of it.

The Wikipedia entry for first-class object gives a nice list of possible characteristics of first-class values. It also cites Strachey as the originator of the term.

Debates about this term happen from time to time - see e.g. this message from a comp.compilers thread in 1992, which references another thread from 1991. The reason for these debates is that the meaning of the term is dependent on the language(s) and feature(s) being examined or compared, which leaves room for people to argue about how the term should be applied.

E.g., it's true that first-class continuations in Scheme don't have a literal representation. However, such a representation doesn't make much sense for continuations, so that doesn't preclude them from being considered first-class, particularly when compared to continuations in languages which don't support continuations as values in any form.

By contrast, a literal representation for functions is standard in most languages. Because of this, not being able to use that representation on the right-hand side of an assignment statement is in itself usually considered enough to disqualify the language from having first-class functions. Having function pointers doesn't affect that, since e.g. the language doesn't require you to use pointers to named integers in order to manipulate them.

OK, i think we can argue

OK, I think we can argue about this quite a *long* time ;)

Trick questions:

1. Is the empty list first class in Scheme? (There is only one empty list in Scheme - there is no way to 'create' a second one, ever!)

2. Are input and output ports first class in Scheme?

forgot about another one:

forgot about another one: are *mutable* pairs first class in R5RS Scheme? ;)

Point?

OK, I think we can argue about this quite a *long* time ;)

Probably only if someone is trying to argue for the sake of arguing. What is it that you're trying to show?

I'm not the one who started

I'm not the one who started arguing about functions not being first class in C ...

Can you answer my questions?

(I don't want to show anything, but I want to understand your point of view ...)

Function pointers get you partway there

Your questions seem to be based on the assumption that the number of instances of a type, or its means of construction, has something to do with the issue, but that's not the case. I think that the use by Andreas of "create/define" may have been slightly misleading; "create" and "define" don't quite apply to some kinds of values, like integers[*] or the empty list, but that doesn't really affect the definition. Both the empty list and I/O ports in Scheme are indeed usually considered first class.

An aspect of the problem with functions in C is that although there's a syntax for creating them, you can only use that syntax in a certain special context, and can't use it e.g. in an argument to another function, or on the right hand side of an assignment statement. Because of this, functions in C are not considered fully first class.

The comp.compilers message I linked to gave this case as an example: "If a language allows values to be used directly (i.e. without being named), then any value that the language requires to be named cannot be called a first-class value." That message also provided a more general definition, which says in part that "the language must allow the value to appear in any syntactic construct in which other values may occur".

I'll repeat, though, that the meaning of the term "first class" is dependent on the language(s) and feature(s) being examined or compared. It's probably better to think of it as a relative property than an absolute, binary one. Having function pointers in a language makes functions closer to first class than they are in a language with no way to reference functions as data, but that alone doesn't make them fully first class, as they are in a language which allows literal function definitions to appear as arguments to functions.

[*] edit - was "integer literals", thanks to Philippa for the correction.

Pedantry

Literals aren't values, they're syntactic representations of values.

The empty list is a particular value, and it belongs to a type of value (symbols) that are first class - nil isn't magic, it's just the particular value that everything uses to denote an empty list. You could, however, write your own versions of everything that used another symbol instead of nil.

I think not being able to create new functions at runtime is a pretty good indication of second-class status!

Not sure about "dynamic"

The pedantry is much appreciated, and I've corrected the comment.

The empty list is a particular value, and it belongs to a type of value (symbols) that are first class

Some only slightly on-topic pedantry: that may be true of some varieties of Lisp, but not of Scheme. In Scheme, 'nil' is not standard. Instead, in R5 and R6 Scheme, "the empty list is a special object of its own type", represented syntactically as '(), and associated with a predicate 'null?' which returns a boolean value of true when applied to the empty list.

But whether a type is inhabited by a single value, or two values (like booleans), or an infinite number doesn't matter, and is really a red herring related to the first-classness issue.

I think not being able to create new functions at runtime is a pretty good indication of second-class status!

That's actually an interesting point. I notice that the Wikipedia entry for First class function uses similar logic, saying that C and Pascal "are not considered to support first-class functions since, in general, functions cannot be created dynamically during the execution of a program."

But is dynamic creation really the issue (or alternatively, what do you mean by dynamic creation)? Imagine if C let you define anonymous functions anywhere, but still didn't support lexical closure. Those functions would not need to be created dynamically, but they would be at least as first class as integers.

Don't we all love category theory

The way category theory supports "first class functions" might be illuminating. For a cartesian closed category, of which the simply typed lambda calculus is its internal language, "first class functions" are supported simply by the existence of the natural isomorphism curry/uncurry*. This leads to a very simple and objective rule for determining whether a language supports first class functions. If you can write curry and uncurry in it, then your language supports first class functions, perhaps so inconveniently as to be useless, but...

You cannot implement curry and uncurry in C.

* and a tiny bit more, namely the action of the A×- functor on objects and arrows and the action of the A→- functor on objects, all of which should be easy for any language even hoping to support "first class functions"
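For the record, here is what the two halves of that isomorphism look like in Haskell (the Prelude already provides curry and uncurry; they are re-derived here only to show how little is involved). The claim above is that nothing analogous can be written in C.

    curry' :: ((a, b) -> c) -> a -> b -> c
    curry' f x y = f (x, y)

    uncurry' :: (a -> b -> c) -> (a, b) -> c
    uncurry' f (x, y) = f x y

    main :: IO ()
    main = do
      print (curry' fst 1 2)        -- 1
      print (uncurry' (+) (3, 4))   -- 7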

In particular, viewing

In particular, viewing currying as a sort of 'staging' mechanism is good enough - you don't need full lexical capture (it's a little difficult in a point-free environment, after all), just a means of passing in a value (computed at runtime) before the eventual call site. Requiring the isomorphism with uncurry going the other way is just a precise way of saying "and don't screw around!".

We might get a better operational intuition by looking at a more typical concatenative/compositional language, but then that's exactly what category theory's there to avoid. Working out which intuitions still hold once we sufficiently abstract a few of the terms in them can be rather illuminating.

More pedantry games

Anton van Straaten: I think that the use by Andreas of "create/define" may have been slightly misleading

Agreed. I was struggling to find a more general wording, but failed. However, you could take the view that, semantically, an integer literal is merely an expression that evaluates to an integer value - i.e. creates one at runtime (notwithstanding more optimized translation schemes usually used by compilers). In fact, some language definitions take that view.

The comp.compilers message I linked to gave this case as an example: "[...] any value that the language requires to be named cannot be called a first-class value."

IMO naming, or the existence of anonymous forms, is a red herring. That's merely syntax, not semantics. Assume SML didn't have its lambda form ("fn"). You could still define it as a local syntactic abbreviation using "let fun" (assuming it wasn't already done the other way round). According to Felleisen's definition of expressive power, you haven't lost anything. Hence it would be weird to suddenly consider functions second-class in that variant. The semantics of functions haven't changed.

ANF et al, anyone?

Additionally, some languages intentionally require you to name everything. I guess second class is the new first class.

Context!

In a language where everything is named, requiring that something be named doesn't make it second class. There's no escaping the fact that "first class" is a relative term, that's relative not only to a language but to specific properties of the features it's being applied to (such as how instances of a particular type are constructed, as discussed earlier).

What languages would that be?

What languages would that be?

The obvious rule of thumb is

The obvious rule of thumb is that something is a 'first class value' if it's not a 'second class value' (cf 'second class citizen'), meaning that you can do whatever you expect to be able to do with other values. In that sense, continuations are first class citizens once created in Scheme and call/cc is the only way to create them that makes sense so they're as first class as it gets. Whereas in C I can't compose functions with the ease that I can in a functional language because I can't ever create a new function at run-time - yet I can build values from other values easily enough in most cases.

The distinction between a function and a function pointer in C

isn't that significant. Modern dialects of C (and all flavors of C++) permit arbitrary and seamless conversion between functions and pointers to functions; no need to use address-of and dereference operators on function pointers. After all, in ML, when one passes one function into another, one really passes in a reference and not a block-of-code-as-value; the syntax just hides the distinction. For immutable things, a reference is as good as a copy.

Where C/C++ both fall flat is in the lack of lambda expressions, or the ability to declare nested functions at all. All functions must be in global scope in C; all have access to (close over) the global environment, but functions cannot be defined in functions. C++ allows functions to be defined within objects. Classes may be defined within functions, so some "nesting" can be accomplished with a little trickery--but these don't close over the enclosing environment. One can get closer still in C++ by making all functions objects. C++ also provides another way of emulating HOFs, with operator (). The reader is referred to the documentation of boost for lotsa examples.

Of course, many other features of C/C++ reduce the need for lambdas. In the LC, the only way to introduce a new binding is in a lambda expression; in FP languages which are designed as syntactic sugar over the LC, declaratives like "let" and "letrec" are lambdas under the hood. C++, and modern versions of C, let you introduce new bindings into the current scope at any convenient point, so there is no need for lambda for this purpose. Furthermore, there is a lot of folklore and experience among industrial programmers, especially back in the pre-OO days, that lexical capture of mutable variables (as opposed to capture of immutable bindings) is a Really Bad Thing To Do--see the classic Wulf and Shaw paper Global Variables Considered Harmful (not available free online; those with ACM access can get it here) for more on that. On the other hand, that experience doesn't seem to deter zillions of programmers reared on Perl and Python...
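A quick sketch of the "let and letrec are lambdas under the hood" point, in Haskell rather than C (the factorial via fix is just an illustrative stand-in for what a recursive letrec-style binding desugars to):

    import Data.Function (fix)

    withLet :: Int
    withLet = let x = 21 in x + x      -- sugar for the application below

    withLambda :: Int
    withLambda = (\x -> x + x) 21      -- the desugared form

    factorial :: Integer -> Integer    -- a letrec-style binding via an explicit fixpoint
    factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

    main :: IO ()
    main = do
      print withLet          -- 42
      print withLambda       -- 42
      print (factorial 5)    -- 120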

Objects, of course, can subsume lambda and vice versa. Smalltalk blocks are technically not lambdas (Smalltalk doesn't have free functions, after all), but the use of "value" message selectors ("value" for 0-ary, "value:" for unary, "value: value:" for binary) as a default method name is ubiquitous. C++, as mentioned above, has "operator ()" for this purpose. Both C++ and Java are handicapped in this regard by incomplete variable capture for nested objects (especially C++), but the objects-as-environments paradigm manages to handle a large number of uses of variable capture. That said, IMHO true lambdas (or something pretty close) are a welcome addition to any OO language--saves typing, after all. :)

Smalltalk?

Let's add Smalltalk. It has lambda and explicit first-class functions within the notion of Block Contexts.

Other functional-like characteristics, such as "currying", are easily added, as has been done by Vassili Bykov and potentially by others before him.

You can find Vassili's articles here:
Currying-in-Smalltalk
Currying-in-Smalltalk part 2

Now for sure there are significant differences between functional languages and object-oriented imperative languages such as Smalltalk and Java. That difference is a good thing. While the principle of computational equivalence as expressed by Wolfram and Turing is valid, it's not always efficient, effective, performant, practical or desirable to rely on that equivalence. Each of the various language paradigms has benefits to offer, and mixing paradigms will sometimes afford additional benefits, although there may be costs involved.

Ack!

That's Church and Turing, not Wolfram and Turing!

Ack! Negated!!! ;--)

Ah, not so fast with the "ack" there...

That's Church, Turing and - yes, now - Wolfram!

Wolfram contributed to the concept of Computational Equivalence in a significant number of ways. For a full comprehension of these discoveries and proofs, read the groundbreaking work "A New Kind of Science".

A summary of Wolfram on this point says: "... the principle of computational equivalence says that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power, and that most systems do in fact attain this maximal level of computational power. Consequently, most systems are computationally equivalent. For example, the workings of the human brain or the evolution of weather systems can, in principle, compute the same things as a computer. Computation is therefore simply a question of translating inputs and outputs from one system to another."
Principle of Computational Equivalence

Here is the good bit that should take away your "Ack!":

"The Church-Turing thesis has been extended to a proposition about the processes in the natural world by Stephen Wolfram in his principle of computational equivalence (Wolfram 2002), which also claims that there are only a small number of intermediate levels of computing power before a system is universal and that most natural systems are universal."
Church-Turing Thesis"

I apologize for forgetting to mention Alonzo Church - who is considered one of the fathers of Functional Programming. Of course, if we mention Church we must also mention Stephen Kleene, who developed the Lambda Calculus together with Alonzo Church. Which gets us back to Lambda and the point of the posting...

Here is the bit that I was referring to (same link as above): "Some computational models are more efficient, in terms of computation time and memory, for different tasks." This is true for the various computer languages as well as for binary computers and quantum computers. As such different programming paradigms have value that they contribute.

Be serious

Your evidence for the significance of Wolfram's contribution to this issue is from wolfram.com? You'll forgive me if I don't immediately add Wolfram to my mental list next to Church and Turing.

It's not relevant if you accept his contribution

It's not relevant whether or not you accept that Wolfram made a contribution. It's clear from reading his work that he has extended the concepts of computational equivalence.

Now to say that you disagree with his theory is another matter. However, given the simplicity of what he is saying it's hard to see how it's not correct.

Whether the links were to his site or not is entirely irrelevant to the quality and veracity of his work. His book A New Kind of Science is half footnotes and citations of others' work - citation heaven. It's clear that he stands upon the shoulders of giants who came before him. Clearly he made an extraordinary effort to cite others' works.

Ignoring his work just because a link was made to his web site would indicate that you are not a scientist and that you are biased by your simplistic rules of citation.

How about you form an opinion of ones work after reading it?

Let there be a translation

Computation is therefore simply a question of translating inputs and outputs from one system to another.

Aye, there's the rub.

Anyone who has studied computation seriously knows that it doesn't really take much to find a complete computational system.

Where all the work goes is in developing ingenious encodings of computations you care about into and out of that system. Which sequence of Ss and Ks will output the right encoding of the square root of 2 to 10 decimal places?

To make a contribution as significant to the study of computation as Church or Turing (or Curry or Kleene...) would require showing useful encodings of useful computations in some fundamentally new way that added new illumination to how computation works.

I'll leave it to your own judgment whether Wolfram fulfills that criterion.

There is more than one way to contribute

I think if you read his work A New Kind of Science you'll find that he has made a significant contribution to computing science and science in general with his work.

While it might or might not fit your narrow definition of what kind of contribution would be required, it's still a significant contribution that stands on its own, on par with any of the giants whose shoulders support Wolfram's feet.

What about your contribution?

you'll find that he has made a significant contribution to computing science and science in general with his work.

Though my reading backlog is always huge, I may someday get to NKS. However, writing a book does not constitute "a significant contribution to computing science and science in general".

By the usual definition of such things, "a significant contribution to computing science and science in general" would require that:

  • There were significantly new ideas presented.
  • These ideas allow practitioners in the field to "do" something new
  • These ideas are widely influential among practitioners in the field.

I'm unaware of any of these holding true, and just suggesting that I read a book without explaining informally how any of these might be true is unlikely to convince me otherwise.

Even convincing me that you found a new idea or practice that you couldn't have gotten from a pre-existing source would go a lot further than just telling me to "read the Bible".

My contribution? Why would you need to know that?

Obviously it's what is written in the book that is important, not just that a book was written. In the case of A New Kind of Science it's quite groundbreaking even if it has not yet been fully recognized. You seem to confuse social recognition with actual science - this is a common phenomenon: "social recognition" over substance.

There were significantly new ideas presented.
With regards to NKS new and significant ideas were presented.

These ideas allow practitioners in the field to "do" something new.
Most certainly new things can be done with the work in NKS. After all it's a new kind of science that was invented.

These ideas are widely influential among practitioners in the field.
Now you're into the "social recognition" and "social proof" arguments, which are irrelevant to the value of a work. To determine this for Wolfram, I guess one approach would be to find out how often he's cited. It's too nebulous a criterion to be of any value.

I'm unaware of any of these holding true ...
Well you've now been informed that at least two of those items are true.

... just suggesting that I read a book without explaining informally how any of these might be true is unlikely to convince me otherwise.
I did provide links to summaries. Google also has many articles - pro and con - regarding Wolfram's work, if you're not inclined to read a book based upon a recommendation from someone you are talking with.

Certainly if one were only to read the "popular" books one would be well read but one would also miss out on a huge range of valuable works.

Even convincing me that you found a new idea or practice that you couldn't have gotten from a pre-existing source would go a lot further than just telling me to "read the Bible".

I think that NKS is a groundbreaking book that has implications for the discovery of new programming paradigms. That's my primary interest in it. In fact I did get a significant boost for my work from reading NKS; it's just that this wasn't a relevant or appropriate comment at the time that I wrote the first comment. This is what I got from reading NKS:

Twenty-five years ago I created a way of computing based upon a fusion of cellular automata and two-dimensional pattern matching that shipped to hundreds of thousands of end users in a commercially successful product. One of the objects identified in Wolfram's book is similar to what I invented, but not that close. What's useful is that Wolfram has begun to create a systematic inventory and taxonomy of these interesting and useful objects. Certainly the one that I shipped to end users was useful, and it had another very powerful characteristic: it would always produce a correct working solution that guaranteed a path through a complex maze without fail (assuming the computer didn't crash, of course).

So yes, Wolfram's work has had an impact upon this researcher and developer of real products. He's enabled me to see the wider universe of the work that I did twenty-five years ago. His work inspired me to take a fresh look at my earlier work. This has turned up new possibilities: the algorithm at the heart of my work can be extended from two dimensions into three - that was an easy leap. More significant is the possibility that the algorithm can be applied to any number of dimensions, including spaces and structures that are variable-dimensional in nature (i.e. they exhibit properties of different dimensions in different places); in other words, it will work in any arbitrary graph.

I am also very interested in systems that can compute while guaranteeing results upon an uninterrupted completion, as that could mean lower counts of software bugs. This is ongoing research and development, most of which I'm reluctant to disclose at this time. Wolfram's work has been an essential inspiration, even more than the Game of Life ever was, since his approach is based upon a systematic and rigorous methodology - after all, it's science brought to computing.

I wouldn't tell you to just "read the bible", and NKS most certainly isn't a bible. It's simply a groundbreaking book in its field - even if it's the book that creates or vastly extends the field, or that creates a new sub-field within the existing field.

You obviously are a very tough sell, Marc - not that I'm selling you anything, it's just an expression. I suppose I'm a bit more open to new concepts, which would explain the massive technical book collection I have that fills two rooms - certainly your approach is cheaper on the budget. I'm not here to convince you; I was simply offering a suggestion that you may wish to read it from the source if my explanation (written with limited time) is inadequate to communicate the notions that took Wolfram twenty or more years to refine.

So, my advice to you is to read the book. ;--)

PCE != C-T Thesis

I welcome correction on this point, but it seems that Wolfram is quoted mostly by himself and his toads ("employees") -- not researchers at academic institutions. I doubt his "contribution" is much used by theoretical computer scientists in any academic institution, anywhere.

The "Principle of Computational Equivalence" is wholly Wolfram's idea -- what does Turing have to do with it? This is not the same thing as the "Church-Turing Hypothesis" -- it's not a statement about what's effectively computable. Rather, it is a vague assertion about complexity -- the PCE states that anything that's a little complex is complex enough to be a Universal Machine.

PCE builds upon C-T Thesis plus others works

Your comments on who quotes Wolfram and who uses his work are entirely irrelevant to the value of his work. It's not a popularity contest, it's computer science.

Yes, the "Principle of Computational Equivalence" (PCE) does seem to be Wolfram's idea. Turing's thesis was on the equivalence of various computing machines and methods, and in this regard Wolfram's PCE is related to the Church-Turing thesis. If you read "A New Kind of Science" (NKS) you'll see how it's related, how it builds on it, and how it differs.

value judgements

Yes, the "Principle of Computational Equivalence" (PCE) does seem to be Wolfram's idea.

Why attribute it to Turing, then?

Core Algorithmic Notions

There are two different concepts that I was attempting to convey - obviously it seems like you didn't get the meaning that I intended.
The "Turing, Wolfram" references I gave didn't resonate with a number of people who read this weblog, which is why I've attempted to address their questions and yours.

The two different concepts are the following.

The first is the Church-Turing-Kleene notion of the equivalence between various computational systems.

The second is the notion, which Wolfram proved, that very simple systems can be as complex as any complex system, plus his extended notions of computational equivalence.

It's clear that there are many possible programming paradigms out there waiting to be invented or discovered (manually or auto-magically) and that they are computationally equivalent even though one might not want to program in them for practical reasons. This principle applies to existing languages too: for example while I can easily program in Java and Perl I attempt to choose better languages such as Smalltalk for as many jobs as possible when it's my choice. When Smalltalk can't do something the way that I want I customize it or find a way to enhance the ZokuScript programming language and system that I'm working on so that it can do what is needed.

So yes, there are many abstract yet usable computational systems that have been written about that I'd not use. And there are some that are interesting, such as some of the properties of functional languages (currying).

If one of the foundation stones of functional systems is currying, as seems to be the case, then it could be said that at the core of functional programming languages is the currying algorithm.

Essentially at the core of each programming language is an algorithmic notion or notions.

Another example is Smalltalk, which has at its core a straightforward message-passing algorithm where objects receiving a message are checked to see if the object's class or superclasses have a method with the same selector (e.g. #curry: or #value:).

Invariably there is also a data structure involved with the algorithm.

The work being done on the Cola languages by Ian Piumarta is an excellent example of the dynamics of the relationship between algorithm and data structures as it applies to programming languages.

In much the same way, cellular automata, such as the one that I invented twenty-five years ago, can be combined with other paradigms or algorithms to form a unique algorithmic methodology that is a distinct paradigm in its own right. Distinct enough to form the basis of a programming language. To me that is very interesting.

Furthermore, the notion that it's possible to have algorithms that guarantee a solution, assuming that the transaction the algorithm is involved in completes, is a very potent and very important notion for the future of computing. The experience that I have to back this up is a commercially successful product.

Additional links on Computational Equivalence

A New Kind of Science: Chapter 12: The Principle of Computational Equivalence.

"During the 1930s, Church made Princeton a leading center of research in mathematical logic, with a focus on questions of the completeness and decidability of logical systems. In 1936 he demonstrated the undecidability of first-order logic ("Church's Theorem"), thus extending the famous result of Kurt Gödel, who was visiting the Institute for Advanced Study at the time. Together with his early students, J. Barkley Rosser, Steven C. Kleene, and Alan M. Turing, Church established the equivalence of the lambda calculus, recursive function theory, and Turing machines as formalizations of the notion of "effective calculability," a result that has come to be known as the "Church-Turing Thesis." In the 1950s and 1960s another generation of Church's students, including Michael Rabin, Hartley Rodgers, and Dana Scott, extended this research to automata, formal languages, and formal semantics, thus shaping the new field of theoretical computer science. Through this work--the lambda calculus--one of Church's earliest creations, gained new life as the basis for functional programming languages and for denotational semantics."
Church Bio"

It's interesting to note the number of people who assisted Church, and how these ideas come full circle, or is that full braid?

What is very cool about Vassili Bykov's work is how simple it is to add currying to Smalltalk. It's just a handful of methods and, depending on the implementation, one subclass of BlockContext (Smalltalk's lambda). Very cool indeed.

Many more programming paradigms waiting to be discovered

Since Wolfram has conclusively demonstrated that even simple systems can exhibit behavior as complex as any complex system (see chapter 2 of A New Kind of Science), and that many natural systems are computing systems equivalent to universal Turing machines, it's clear that many approaches to computing are possible - many more than we know about now.

If one were to take the same systematic approach to finding viable computing approaches that Wolfram took with his search through simple systems, we might find other useful computing paradigms. We might even find useful unified paradigms if we were to search the space of possibilities near existing languages such as Erlang and Smalltalk, for instance. Actually such searches are underway (see the Squeak news list on fusing Erlang and Smalltalk notions). However, a methodical and systematic search using an automated approach may turn up results that humans wouldn't directly find manually.

In any event, it's clear that different programming paradigms have benefits and sometimes a fusion makes sense, other times it won't though. It's also clear that it may not take much to have useful and novel computing paradigms based upon simple systems.

It is very interesting that currying was so easy to add to Smalltalk. This is the power of Lambda as implemented in Smalltalk. Obviously currying is a simple system that can generate complex behaviors. There are others out there. What are they?

To boldly go... where?

However, a methodical and systematic search using an automated approach may turn up results that humans wouldn't directly find manually.

Some of these ideas go back to (at least) Genetic Programming.

There is a fundamental problem with this idea: how can you pattern match something whose structure you have never seen before?

We have enough difficulty defining a known paradigm (e.g. this thread) let alone automatically recognizing a paradigm that is both new and useful.

Seriously, the old fashioned step-by-step human-directed-research approach has much better chances of succeeding.

Luddism belongs in the past along with fallen civilizations.

Yes, NKS has pieces that stand upon Genetic Programming. In a way NKS borrows the systematic search aspect of Genetic Programming to carry out extensive surveys of the spaces of possible simple systems.

"There is a fundamental problem with this idea: how can you pattern match something whose structure you have never seen before?"

That is just what Wolfram accomplished in his research that he documents in A New Kind of Science (NKS).

It's an excellent question worth pursuing. And no, it won't put you out of a job with manual computing work. ;--)

"Seriously, the old fashioned step-by-step human-directed-research approach has much better chances of succeeding."

That kind of thinking is likely why Wolfram's work, in NKS and elsewhere, is so groundbreaking. With so few people working in the area, it was ripe for automated picking by Wolfram and his tool, Mathematica.

Luddism belongs in the past along with fallen civilizations.

Wolfram book was discussed

Wolfram's book was discussed here in the past, and I don't feel that the current discussion is adding to what was previously said here.

The type of expertise required to evaluate Wolfram's claims in NKS is not the type of expertise usually related to the study of programming languages, which is why a substantive discussion of the book is beyond the scope of LtU.

The book was discussed at length by experts, and I suggest that those interested in the book look up those discussions.

Object Oriented Cellular Lambdas

I take exception to the notion that aspects of NKS are "not related to the study of programming languages" or that its relevant contributions are beyond the scope of the current discussion. I believe that aspects of NKS are directly relevant in many ways, some of which I've already stated. Certainly NKS goes far beyond the scope of programming languages; however, there is a great deal at the intersection of the two. It would be a shame to turn one's back on it, or even worse, to stifle speech on that topic.

Certainly a substantive discussion of the aspects of NKS which relate to programming languages is within the scope of a site dedicated to programming languages?

It's clear to this expert that that is the case. I don't need to evaluate all of Wolfram's claims in depth to see the value of some of them in my field of expertise, computer systems, programming and languages. The fact that simple systems can form complex behaviour is very interesting from a research and a development perspective as it broadens and deepens the space in which we can search for new language features to include almost all, but not all, simple systems.

As I pointed out in my comments, NKS is directly related to programming systems. The research that I'm conducting into a next-generation Smalltalk-like language, ZokuScript, relies in part on the work of NKS and of others in the field of cellular automata. Merging lambda and cellular automata with objects is an important step forward. In many ways objects come from cellular automata, and as both are particle systems they share many interesting properties. How functional properties fit in the universe of particle systems is substantial and important.

What is of particular interest is how currying can apply in the cellular-automata-based search algorithms that I alluded to in a prior comment. It may well be that certain algorithms are best executed in particular languages, or at least with features of a particular language such as currying, message passing or the use of cells in a grid to do work. It may be that currying is excellent for use in pattern-matching algorithms, especially considering how well it fits with parallelization, a topic of great interest these days. Further, it is of interest that it was so easy to add currying to Smalltalk atop its implementation of lambda - block closures.

In designing a new language that brings the best from different programming paradigms, it's now clear that the nature of Smalltalk's lambda is sufficient to implement currying in a particle system, i.e. a cellular automata system. I'd not have seen that without the foregoing discussion.

How many other features of functional languages can be added to imperative languages such as Smalltalk? That is also of great interest and relevance to this reader and poster.

A New Kind of Thread

A substantive discussion of the aspects of NKS which relate to programming languages may well be in scope for this site. But it's not really on topic for this particular thread.

Agreed. That said, I'd be

Agreed. That said, I'd be curious to hear what cellular automata offer that things like process calculi don't - I can't help thinking that CAs:process calculi::combinator calculi:lambda calculus, which leaves me vaguely curious as to whether there are any interesting bases but rather sure we'll run into them without playing with CAs.

Working From Simplicity

The fact that there are such simple constructs that are universal is indeed very interesting. However, the same observation can be made about, e.g., the SK combinator calculus. It's ridiculously simple, and proven to be universal. But no one in their right mind thinks of programming with the SK combinator calculus. It's too simple. Realistically, human judgment will continue to be required to judge which universal constructs are worth embedding in the form of programming languages and which ones are not.
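For the curious, here is the SK basis transcribed directly into Haskell, purely as an illustration of how little there is: the identity combinator falls out as i = s k k, yet nobody would want to build programs by composing s and k by hand.

    k :: a -> b -> a
    k x _ = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    i :: a -> a
    i = s k k

    main :: IO ()
    main = print (i 42)   -- 42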

Indeed this is nothing new.

Indeed this is nothing new. The same can be said of Turing machines, and even LC (beta reduction is quite simple). The latter observation, being often mentioned as the core of functional programming, is particularly apt for this thread.

Indeed

I'd have to concur with your assessment.

The same can be said of many existing programming languages. ;--)

Re: Wolfram

I expressed an opinion on this here.

Look at the ancestors of the post for context.

(I see now, below, that Paul Snively had a similar reaction.)

Update: Oops, I already linked this post elsewhere on LtU. Sorry for the repeat.

Style

In large part, I'd say the difference is in the common style. The divide is mostly felt along the functional-OOP line. On one hand you've got objects, which are opaque and do things on their own, messaging each other, etc. On the other, you've got some data structure around which some referentially transparent transforms act -- never modifying data, just making new copies. In essence, it comes down to whether your data has the ability to boss itself around.

Most large programs need both, to be able to organise things sensibly. Erlang pulls this off superbly, where processes are basically objects, sending each other messages, influencing each other's state, but being functional inside each process. The question arises whether you could turn that around: build functional-style programs, focused around data structures, with OOP building blocks. This avenue seems much less well-explored, even though things like Scala are making some headway, though I'd say there's not yet a definitive Scala style.

I think four features that

I think there are four features that are typical of functional languages:

1) First-class functions
2) Tail recursion
3) Recursive datatypes
4) Immutable datatypes by default

How many are needed is up for debate.
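A small Haskell sketch touching all four (the names are made up for the example):

    data Tree = Leaf | Node Tree Int Tree   -- (3) a recursive datatype
      deriving Show

    sumAcc :: Int -> [Int] -> Int           -- (2) tail recursion, accumulator style
    sumAcc acc []       = acc
    sumAcc acc (x : xs) = sumAcc (acc + x) xs

    insert :: Int -> Tree -> Tree           -- (4) "updating" builds a new tree
    insert x Leaf = Node Leaf x Leaf
    insert x (Node l y r)
      | x < y     = Node (insert x l) y r
      | otherwise = Node l y (insert x r)

    main :: IO ()
    main = do
      print (sumAcc 0 [1 .. 10])            -- 55
      let sumFromZero = sumAcc 0            -- (1) the function is an ordinary value
      print (sumFromZero [1, 2, 3])         -- 6
      print (insert 3 (insert 1 Leaf))      -- the original Leaf is untouched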

list comprehensions?

I think the above four are necessary -- though I'm not sure how (3) translates in a dynamically typed language -- but my functional experience would be much impaired without list comprehensions.
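For anyone who hasn't met them, a short Haskell illustration (the function name is invented for the example): generators, multiple bindings and a guard in one expression.

    pythagoreanTriples :: Int -> [(Int, Int, Int)]
    pythagoreanTriples n =
      [ (a, b, c) | c <- [1 .. n], b <- [1 .. c], a <- [1 .. b]
                  , a * a + b * b == c * c ]

    main :: IO ()
    main = print (pythagoreanTriples 20)
    -- [(3,4,5),(6,8,10),(5,12,13),(9,12,15),(8,15,17),(12,16,20)]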

I don't see how the typing

I don't see how the typing affects (3) at all. A list in Lisp is clearly a recursive datatype: it has a base case, nil, and an inductive construction, cons.

Probably talking about immutability

I think Jason was talking about property (4) - default immutability. In Scheme, for instance, symbols can be rebound arbitrarily. It's just that the Scheme community has a history of trying to avoid it as much as possible.

en4cement

It's true that (4) raises a question of its own for dynamic languages -- enforcement.

What's the question? I don't

What's the question? I don't see how, for example, Erlang, Oz and OCaml differ on this topic.

Um, the question was..

Scheme is the question. Or should I say Scheme is the exception that calls the rule into question. In Scheme you can mutate any variable at any time for any reason. There's no such thing as an immutable variable in Scheme - just variables that are never mutated.

Yes sure, Scheme does not

Yes, sure, Scheme does not conform to (4). I just don't think that all the other (two?) dynamic functional languages should be burdened by that. Scheme, like most Lisp dialects, has extensive imperative features, but that's just part of the design. It doesn't have anything to do with the dynamic typing.

Not in R6RS

This is no longer true in R6RS - variables can only be mutated from within their lexically enclosing module.

Selective Enforcement

Is O'Caml really dynamic? I had no idea. I thought it was statically typed, type-inferring like Haskell.

By "enforcement" I meant "selective enforcement". When you have optional mutability in the language -- but you want to say no mutability for these things, please. In Erlang, there is no mutability, so there's no problem.

No, OCaml isn't especially

No, OCaml isn't especially dynamic; I meant to use OCaml as a comparison, with the intent that the three languages are just as functional and that these points are completely orthogonal to typing concerns. Both Oz and OCaml have immutable datatypes by default, but you also have access to mutable cells. And mutable datatypes are macro-expressible in Erlang (and rather easily too).

Erlang

In Erlang, there is no mutability

That statement is made often but it's not true. Erlang has lexically scoped immutable variables and "process" scoped modifiable variables. The syntax for accessing them is totally different, but they're still variables.

process scoping

What is a "process scoped modifiable variable"? You mean the process with a given name can change from time to time?

Each Erlang process has a

Each Erlang process has a dictionary that can be mutated.

side-effects in Erlang

The mutable data structures in Erlang - ets and the process dictionary - can both be modeled on top of processes and message passing, so in a sense, they are optimizations only. Another way to see this, is that you can model just about any mutation operations on top of processes and message passing, much like you can perform just about any kind of dirtiness with Haskell monads.

I wonder if anybody ever noticed that before?
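To make the "modeled on top of processes and message passing" point concrete, here is a hedged transcription of the idea into Haskell rather than Erlang (all names are invented for the sketch): a thread holds the state, and "mutation" is just the loop carrying a new value.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    data Msg = Put Int | Get (Int -> IO ())   -- the cell's protocol

    cell :: Int -> Chan Msg -> IO ()
    cell state ch = do
      msg <- readChan ch
      case msg of
        Put n   -> cell n ch                  -- "mutation" = recursing with new state
        Get rep -> rep state >> cell state ch

    main :: IO ()
    main = do
      ch <- newChan
      _  <- forkIO (cell 0 ch)
      writeChan ch (Put 42)
      ans <- newEmptyMVar
      writeChan ch (Get (putMVar ans))
      takeMVar ans >>= print                  -- 42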

much like you can perform just about any kind of dirtiness with Haskell monads.

If we allow Church-Turing equivalence into our discussion of categorizing languages then we end up with an inability to distinguish languages in any meaningful way except those that are universal and those that aren't.

only optimizations

They may be optimizations, but they are still there...ets and the process dictionary can be modeled with processes and message passing, but they aren't; and mutating operations can be modeled within Haskell's type system using monads, and they are.

I'm not sure what a dynamic language would do to offer comparable performance without the optimization you describe -- maybe only through JITs?

(3) and typing

Well, you can't define types in (any?) dynamically typed languages. You can define classes, but they are a different kettle of fish -- generally, they sit on top of the real type system.

That's true, I was talking

That's true; I was talking more about a language having recursive datatypes than about letting you define new recursive datatypes. Although that's a really useful feature too, I wouldn't consider it essential to functional programming.

Some reading suggestions

Here are two papers that I really like.

The first is a paper by Felleisen which talks about the relative expressiveness of programming languages. I like to say that a language is "functional" if it can macro-express the terms of the simply typed lambda calculus (where the term macro-express is defined in the paper). More or less, this means that there is a compositional semantics-preserving mapping from STLC into the target language. (EDIT: this isn't my idea, and I don't see it in this paper; I can't remember where I read it.)

The second paper is by Amr Sabry and discusses what it means for a functional language to be pure. He proposes a specific criterion and discusses why he feels it is superior to some other commonly advanced ones.

On the Expressive Power of Programming Languages
What is a Purely Functional Language?

Not sure I follow

I'm not sure I understand your requirement re: STLC. Can't STLC be "macro expressed" in Java with a construct like "interface Lambda<A,B> { public B apply(A x); }" and anonymous classes? Oh, it would be ugly as sin, but it's doable.

Yet, even though I strongly believe in the link between functional and OO I can't bring myself to flat say Java is functional.

Of course I might be misunderstanding something about what you're saying or even about what the paper referenced was saying.

Anyway, one interesting conclusion of that paper is that "expressiveness" does not necessarily mean "better" for many of the common definitions of better. One conclusion for instance is that an impure language is more expressive than a pure cousin, yet many on LTU (myself included) are fans of purity. If you look in the Algol family, a language with a global "goto" would be more expressive than a more structured cousin without, yet you'll be hard pressed to find a goto fan these days except among the bare-metal surfers.

I think so

I think the answer to your question about Java is "yes." According to this definition Java would be considered a functional language (of course, making this a formal statement that could be proved would require firming up the definition some). This clashes somewhat with my intuition, as it did with yours. I think some such clashes are inevitable whenever we try to invent a formal definition for what is essentially a concept in natural language. The question that naturally arises is "is my intuition wrong, or have I defined the wrong concept?"

I'm not completely wedded to this idea, but I think it's a step in the right direction. Perhaps macro-embedding the STLC is a necessary but insufficient condition for the natural-language concept "functional language." I don't know offhand of a criterion I'd add to this one to make it better, though.

Hats Off

My hat is off to you for coming up with a formal definition and sticking with it wherever it might lead. Out of curiosity I built the S and K combinators based on that idea then implemented the I combinator in terms of them. It was as hideous as you might imagine. As you say, that's still not a formal proof but even the little I did caused my eyes to bleed. Whoever does a formal proof (or disproof) of STLC->Java will need a perverse nature and far more determination than I can muster.

Pattern matching in dynamically typed languages

For example JavaScript and Ruby are dynamically-typed. So things like pattern-matching are meaningless in their scope (or have a very different meaning)

Pattern matching works very well in dynamically typed languages; see e.g. Prolog (which has unification, which is more powerful), Erlang, or various metaprogramming libraries for Ruby.

The obvious difference is that statically typed languages can guarantee e.g. disjointness of clauses and exhaustive matching (without failure) statically, while dynamically typed languages do it dynamically.
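For example, GHC will flag a missing clause at compile time, whereas a dynamically typed language only discovers it when a call actually falls through at run time (a small sketch, assuming the -Wincomplete-patterns warning is switched on):

    {-# OPTIONS_GHC -Wincomplete-patterns #-}

    data Shape = Circle Double | Square Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Square s) = s * s
    -- GHC warns that the match is non-exhaustive (Rect is missing),
    -- so the gap is caught statically instead of as a run-time failure.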

Weakly typed languages, on the other hand, have more problems. You can interpret a value as multiple types, violating the disjointness of the patterns, meaning you have to order the types or the clauses somehow to get a single valid result. (You have to do that in statically/strongly typed languages too, just not as often.)

the original question

There's no such thing as a "functional programming language" except as a colloquial expression that means "a language liked by people who write in functional style". But, more useful things can be said:

Programming in "functional style" means adopting a style in which:

  • every program adds a new "abstraction" to the programming environment
  • abstractions (programs) can be composed, in some uniform way, to form new programs
  • the composition of abstractions to form new abstractions is the only way (in functional style) to create new programs

It almost doesn't matter what constitutes an "abstraction". What matters is that there is a simple, uniform, universal way to compose programs, obtaining new programs which may themselves be further composed with others.
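In a language with first-class functions, the uniform composition operator can be as plain as function composition; a trivial Haskell sketch of my own, just to illustrate the "compose, then compose again" point:

    import Data.Char (toUpper)

    -- Each definition is an abstraction; (.) is the uniform way to combine
    -- them, and every combination can itself be combined further.
    shout :: String -> String
    shout = map toUpper

    exclaim :: String -> String
    exclaim = (++ "!")

    announce :: String -> String
    announce = exclaim . shout            -- composed once...

    doubleAnnounce :: String -> String
    doubleAnnounce = announce . announce  -- ...and composed again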

The opposite of functional style is "destructive style". In destructive style:

  • A small set of "noop" programs exists -- the simplest programs. (e.g., "int main() { return 0; }")
  • New programs are formed from old programs by destruction: code is added, deleted, and modified in such a way that abstractions formerly implemented are removed, and new abstractions substituted. Thus, there is no uniform way to compose two programs.
  • Programming (in destructive style) is not cumulative: every program is its own "project".

So, is "C" functional (standing for the "static x weak" box)? It's a mix and it depends how you use it. In conventional C, flow of control is almost always expressed in functional style. In conventional C, the state of variables and the heap is almost always expressed in destructive style. C is a functional programming language for you if you adopt a style that tames variables and the heap.

Note that neither C's static typing nor its weak typing is relevant to whether or not C is a functional language: it's all in how it is used. Since people's *conventions* for using the C heap don't lend themselves to composable abstractions, C is often rightly thought of as a destructive-style language. Those conventions aren't determined, though, by C's typing system.

So, in conclusion:

There's no such thing as a functional programming language and typing (weak/strong, dynamic/static) is largely irrelevant to functional style.

-t

functional style and an economic law

K functional-style programs can be composed 2^K ways in a "single step". So, I have a lot of choices.

But, in a single step, starting from K destructive-style programs, there are only K more destructive-style programs I can create.

So, even a modest functional-style approach wins big in cases where "choice" in next-steps of coding has value.

By this reasoning, the reason we haven't seen a popular production OS in functional style yet is that customers don't want their OS to evolve very rapidly anyway, so "choice" about how to develop the OS in the future isn't considered important. (As a corollary, to "win big," a functional-style OS project would have to not just potentiate choice, but then make some good choices and snag irresistible new functionality faster than destructive-style OSs could keep up.)

-t

There's no such thing as a

There's no such thing as a functional programming language and typing (weak/strong, dynamic/static) is largely irrelevant to functional style.

I fully agree with the latter half (except that I would even delete "largely"), but the former is nonsense. Since you mentioned C: that doesn't have anything close to first-class functions, on which functional style heavily relies. So there certainly is a difference, beyond mere syntax and mode of use.

A good litmus test for a language being functional is whether you can write a combinator library in it without having to jump through enormous hoops, and actually get a result that would be of real practical utility.
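To make the litmus test concrete, the kind of thing meant is something like a tiny parser-combinator library, where parsers are first-class values and new parsers are built only by combining old ones (a bare-bones Haskell sketch of my own, nowhere near practical utility but showing the shape):

    -- Parsers as first-class values; combinators build new ones from old.
    newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

    satisfy :: (Char -> Bool) -> Parser Char
    satisfy p = Parser $ \s -> case s of
      (c:cs) | p c -> Just (c, cs)
      _            -> Nothing

    orElse :: Parser a -> Parser a -> Parser a
    orElse p q = Parser $ \s -> case runParser p s of
      Nothing -> runParser q s
      result  -> result

    many1 :: Parser a -> Parser [a]
    many1 p = Parser $ \s -> case runParser p s of
      Nothing        -> Nothing
      Just (x, rest) -> case runParser (many1 p) rest of
        Just (xs, rest') -> Just (x:xs, rest')
        Nothing          -> Just ([x], rest)

    digits :: Parser String
    digits = many1 (satisfy (`elem` ['0'..'9']))
    -- runParser digits "42abc" == Just ("42","abc")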

A good litmus test for a

A good litmus test for a language being functional is whether you can write a combinator library in it without having to jump through enormous hoops, and actually get a result that would be of real practical utility.

This would exclude first-order functional languages.

First-order functional languages

Which wasn't all that unintended. To me, "first-order functional language" is a contradiction in terms. Functional without first-class functions? IMHO, "pure" would be a more accurate description here.

Fooled by the language?

That's brushing it under a piece of shorthand though - pure is short for "purely functional".

How would one classify a

How would one classify a pure, first-order language with algebraic data types in terms of programming paradigm? Purely functional seems to fit, but I understand your emphasis that higher-order functions are quite important to useful functional programming.

Two meanings

"Functional" comes with at least two meanings. The first (and I think the more common) says that functions might or might not be pure but they are "first class." While there's room to argue a bit about what first class means, it's clear that first order functional languages don't meet this definition but that ML and Scheme do.

The second is the "pure" sense where functions behave like they do in math - "side effects" aren't allowed. In other words, this definition contrasts true functions from the procedures that are found in many languages. By this definition first order functional languages are functional but Scheme and ML are not.

Is that a contradiction? I don't think so - it's just a result of using natural language which allows the same phrase to mean different things. So it's silly to argue that X is and Y isn't or vice versa without first picking a definition and making it clear which one you're using. In this case there's more than adequate precedent for both usages so arguing which one is more correct is just a giant rat hole.

Having said all that, I'll admit that I agree with Andreas and personally prefer to use "functional" to mean the first definition and "pure" to mean the second. Still, I have to always be aware that my preference may or may not jibe with how others use the words: Joy compared to other functional languages.

Lenin?

Please, may I ask, what does "Lenin-Typed" mean?

I have noticed that the business model of the authorship, maintenance, distribution, and use of the Linux kernel does seem to conform to "from each according to her ability, to each according to her need".

dunno, tho i've heard of

Lenin & Stalin

Thanks; that was Linear typing (very close to uniqueness typing with more restrictions); my mistake; corrected.

Uniqueness typing (which I first saw in Clean) is, to me (IMHO), a more approachable way for an imperative mind to tame side effects than monads. This is not scientific, but rather empirical (and from a pragmatic point of view may be true).

please see the relevant parts of the DDC thesis

I think somewhere in there it talks about how the ASCII goes bad. I think the same thing happens in ATS, if I understand correctly.