## General purpose as a special case?

In another forum, I attempted to start a discussion of best languages for learning new paradigms. The idea was to identify languages that would lead a programmer to do new things, or at least learn new concepts that could be applied to working in the programmer's everyday language.

In preparation, I made a list of every paradigm I could find mention of. Then I pruned the list by asking, "Would using or not using this concept require changing how a problem is modeled?" For the remaining paradigms, I searched for the best language, with the requirement that any implementation not be just an academic toy and not be used only in special-purpose tools.

I was surprised to end up with a very short list. It could be I just did a bad job of setting criteria. But perhaps it is the case that in the space of possible paradigms, ones flexible enough for general-purpose programming inhabit a relatively tiny range?

## Comment viewing options

### I'll try to make this more focused

Consider the case where a programmer wants to expand his perspectives by learning a new language. Why would Befunge not be a suitable answer? It is definitely different, has an active user community, and there are multiple IDEs, debuggers, and code repositories.

I think there is an assumption that any specific language to be learned has to be applicable to some range of "real work", and that the same applies to programming paradigms.

So, I'm asking for speculation on what makes a programming paradigm suited to general purpose programming. Can we make a general theory for why logic programming flopped, object-orientation succeeded, and dataflow programming is invisible except in specific tools where it dominates?

### Where angels fear to tread...

Can we make a general theory for why logic programming flopped, object-orientation succeeded, and dataflow programming is invisible except in specific tools where it dominates?

Of course we can. But please define first what you mean by "flopped", "succeeded", and "invisible". Define also what you mean by "logic programming" (do you mean Prolog, constraint programming, or something else?) and "object-orientation" (do you mean data abstraction, inheritance, polymorphism, some combination thereof, or something else entirely?).

What I mean to say is that your question is so vague as to be completely meaningless.

### Design!

So, I'm asking for speculation on what makes a programming paradigm suited to general purpose programming. Can we make a general theory for why logic programming flopped, object-orientation succeeded, and dataflow programming is invisible except in specific tools where it dominates?

You seek answers to non-scientific design-style questions, what would be referred to as "wicked problems" in other fields. Accordingly, they are not amenable to enlightenment via general theories.

Related: critical programming language design.

### Wicked problems

"Wicked problem" seems to be a term mainly used in sociology. In cognitive science and artificial intelligence, it's "ill-structured problem". I see there is no Wikipedia article on ISPs and the one on wicked problems doesn't even mention ISPs or Herbert Simon, who wrote the classic paper, "The structure of ill-structured problems", Artificial Intelligence 4:181-201, 1973.

I agree that determining what makes a successful paradigm is an ill-structured problem. So, we need to approach it as one does other ISPs, by satisficing.

### But what knowledge are you

But what knowledge are you trying to acquire? Questions like this inspire impassioned debate without any hope of being answered in any conclusive, satisfying way.

### Consider the set of

Consider the set of paradigms identified on PvR's chart. I think that, with a modest amount of disagreement, we could place each paradigm in one of three subsets.

1. Ones that have become widely used as primary paradigms for general purpose programming.
2. Ones that are used only as auxiliary paradigms in general purpose programming.
3. Ones that are used as primary paradigms only in a narrow range of applications.

I expect that subsets 2 and 3 will have some overlap.

Question: Can we make any interesting generalizations about the paradigms in subset 1?

### Yes, but subject to

Yes, but subject to historical bias. As an example, I've found it interesting to look at properties that are true *irrespective* of being programming languages. E.g., restrictions (in 'democratic' systems) become more lax over time. As languages mature, we tend to see the addition of features. Language revisions that reject programs that used to compile are generally frowned upon, as are wholecloth replacement languages that 'do less'. It'd be curious to see such trends break down for particular types of languages, such as your subsetting. E.g., do DSLs follow their own path, where productivity is so subjective yet important?

Note: I *haven't* verified the claim about programming languages. I've only read about the phenomena in repeated studies about law reform, school policy, etc..

### Avoid pain

If a designer wants a language to achieve widespread adoption then they must avoid pain for the programmer. Making the language complete so that it can be used to write any application is only the first step. This is generally the step where a language is specialised for some set of applications that the designer has in their head, making the types of jobs associated with writing those applications as easy as possible. This corresponds to the list that you have behind the link.

Specialising a language for any purpose is always a trade-off: making one thing easier to express will make another thing harder. By picking certain activities to promote within a language, the designer is making implicit choices about which things will become harder. I'll use Prolog as an example because you mentioned its failure to achieve widespread adoption: processing strings is painful in Prolog. Backtracking search and unification are a great fit for some applications, but they only supply difficult ways to describe string processing (once you get beyond append/3, obviously). String processing is such a vital part of "general purpose" programming that it kills the adoption of Prolog as soon as you leave toy applications and try to build something real with it.

Contrast the adoption of Prolog with the adoption of Perl: a language that is pretty bad at everything but surprisingly easy to write string processing code with. The set of painful problems it has are small / obscure enough that it reached critical mass with developers.

### Prolog was designed to be an

Prolog was designed to be an influential language that explored (pioneered) the space of logic programming. It wasn't designed to be a usable language! Prolog was incredibly influential in the field even if it's not used very widely; I would totally be happy with a success like that.

Sometimes we just don't design for general use; especially if we are trying to increase humanity's body of knowledge through exploration. But sometimes we lose track of this goal and try to go further (e.g., marketing Prolog for general purpose programming). We always fail spectacularly, but even these failures are useful for gaining knowledge (investors might not be happy, though).

Languages designed for use are "safe" and boring. They get the job done, but we don't learn much from them.

### CSP

Two of the most important lessons we learned from Prolog have to be:
1. Don't do it like that
2. No seriously, it's still not a good idea.

I probably sound like I hate Prolog, although actually I have quite a soft spot for it, so I'll expand those points a little. It was exploring the space of what we can do with unification and backtracking search that made people realise how / why / when constraint programming is necessary. This has led to a range of interesting languages (Mercury and Oz, for example) and tools to support constraints in non-logical languages. The use of Prolog as a hosting language for exploring other language designs through meta-interpreters is a very powerful technique. It works well as an exploration tool, but becomes too messy to remain manageable as the scale of the applications inside the embedded language increases.

This is actually a point missing from the original list, and it is one that a language designer would value more highly than a programmer: ease of embedding DSLs. It touches on some of the points listed under Lisp and some of those under Prolog. While I'm listing holes, some kind of shell / scripting language like BASH should make an appearance on such a list. The strengths there are almost a complementary set: composing scheduling / control / dataflow between tools in alien languages.

### Against Prolog: more meat please

Could you be a bit more precise in how we learned from Prolog that it shouldn't be done that way? The rest of your post is interesting but doesn't develop that point at all.

I'm also under the impression that you conflate "unification and backtracking" (that is certainly logic programming) and "constraint programming", which I consider to be different (there is a notion of constraint domain, while unification and backtracking are domain-agnostic), even if most logic languages also support constraint programming and this is in some sense always about search.

My own gripe about Prolog is that it is not declarative at all; when actually programming in Prolog you are constantly thinking about the operational semantics, more than in almost any other language I know, except maybe assembly languages.

Also, the handling of negation is a mess -- but, as Girard flowerily puts it, there might be no reasonable handling of negation in this branch of interpretations of logic.

Maybe this is not the right place to discuss Prolog more deeply; feel free to create another topic for it. But you just said too much or too little.

### Logic vs. CP

I think he was referring to the need to shift from logic programming to constraint programming when the search space becomes large. Of course, there is "constraint logic programming", which combines numerical constraints and logical statements. Formulas are divided between those to be treated by logic programming and those to be treated as constraints.

As far as being against Prolog, he was just sticking to the context of this thread, which is general purpose programming.

### Point Development (one unit of)

I veered away from describing how we learned that logic programming's problems are better handled by other techniques because I'm not aware of a good reference to point to; what follows is mainly personal experience. Prolog sells itself with the claim that it allows declarative programming and bidirectional programs. Neither claim holds once you move beyond small-scale exercises. One kind of application that Prolog does let you declare excellently is depth-first search. Unfortunately, many search problems are horrifically inefficient when tackled this way, and some form of constraint entailment and resolution is required to find a solution. So while Prolog can tackle some search problems directly, it requires the addition of a constraint solver to tackle others. I've used ciaopp fairly heavily and it does a very impressive job, but it is very dependent on the ordering of constraints presented in clauses: this is the opposite of a declarative approach.

Arithmetic is one of the "awkward squad" for Prolog: anything that uses is/1 instantly becomes dependent on the operational semantics and the procedural interpretation of the code. As arithmetic is such a fundamental part of most application domains, almost all Prolog code becomes dependent on a uni-directional, non-declarative primitive, and it tends to leak throughout the code (there is an analogy here with monadic pollution in functional languages).
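
To make the mode issue concrete, here is a minimal Python sketch (all names are my own invention, not any library's API) contrasting a one-directional function with a relation that can be queried in any mode, which is exactly what is/1 gives up:

```python
# A function computes in one direction only: inputs in, output out.
def add(x, y):
    return x + y

# A relation plus(X, Y, Z), meaning X + Y = Z, over a finite domain.
# Any argument may be left unbound (None); all solutions are yielded.
def plus(x=None, y=None, z=None, limit=20):
    for a in (range(limit + 1) if x is None else [x]):
        for b in (range(limit + 1) if y is None else [y]):
            c = a + b
            if z is None or z == c:
                yield (a, b, c)

# Forward mode, like 3 + 4:
forward = list(plus(3, 4))            # [(3, 4, 7)]
# Backward mode, "what plus 4 gives 7?" -- the query is/1 cannot answer:
backward = [a for (a, b, c) in plus(y=4, z=7)]   # [3]
```

This is only generate-and-test over a toy domain, of course; real logic systems get the same effect through unification rather than enumeration.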

Given that these limitations have been known for decades, I would point to the design of newer languages such as Oz or Mercury as support for the idea that we have explored what Prolog can do and there is room for improvement. Another research direction is expanding the use of constraint solving from a procedure used within resolution to an actual form of resolution itself, as in Progol and related systems.

PS Loved your post below, brilliant source of advantages that DSLs possess over libraries.

### CiaoPP

Re: comment-68507:

I've used ciaopp fairly heavily and it does a very impressive job but is very dependent on the ordering of constraints that are presented in clauses: this is the opposite of a declarative approach.

I've heard of Ciao Prolog but not CiaoPP. From the reference manual:

CiaoPP is the abstract interpretation-based preprocessor of the Ciao multi-paradigm program development environment. CiaoPP can perform a number of program debugging, analysis, and source-to-source transformation tasks on (Ciao) Prolog programs.

Thanks for mentioning it.

### I don't mean to detract from

I don't mean to detract from folks' philosophical criticisms of logic programming as a flawed paradigm for telling computers what to do, but in considering the success or failure of a language, don't underestimate the power of "superficial" details in the user experience. I'm quite serious.

Anyone else here remember MBASIC? I spent many happy hours, back in the day, coaxing MBASIC to do nifty things. Then I didn't use it for many years, until finally in the late 1980s I needed to fire up the old MBASIC interpreter one last time, to decompact some files preparatory to archiving them on a newer machine. And the moment I entered the interpreter, I felt... happy. The pervasive sense that all was right with the world was so thick you could cut it with a knife. I remembered that feeling from a decade before, and it was so strong there had to be an explanation.

Then it came to me: other vintage BASICs I'd used had the prompt "READY", which was pleasant enough, but MBASIC's prompt was "Ok". No matter what ghastly, inscrutable error message it may have just given you, its final word was to reassure that despite all that, it was "Ok". Contrast that with Prolog, whose final word was often "no". I'm not saying Prolog would have been a runaway success if its message when failing to solve a query was "Hooray!", but attitude does matter.

yes!

### therapeutic aspect of a REPL

Re: comment 68492:

The pervasive sense that all was right with the world was so thick you could cut it with a knife. I remembered that feeling from a decade before, and it was so strong there had to be an explanation. Then it came to me: other vintage BASICs I'd used had the prompt "READY", which was pleasant enough, but MBASIC's prompt was "Ok".

This would also explain why Forth enthusiasts are so enthused about Forth:

    $ gforth
    Gforth 0.7.0, Copyright (C) 1995-2008 Free Software Foundation, Inc.
    Gforth comes with ABSOLUTELY NO WARRANTY; for details type `license'
    Type `bye' to exit
    1⏎
    1  ok
    41 +⏎
    41 +  ok
    .⏎
    . 42  ok


Re: Prolog:

### DSLs, Dataflow

DSL-making is something I considered but couldn't come to a definite opinion about. Consider languages that are popular for making DSLs (Lisp, Ruby, Haskell,...) versus ones that aren't (C, Pascal,...). What pattern is there? Well, most of the DSL languages are functional or have significant functional features. Is making a DSL anything more than defining high-level functions? Maybe the making of DSLs is just part of the functional paradigm.

My biggest struggle was with the dataflow paradigm. It is distinctive and interesting. It is the basis of many successful software tools: LabVIEW, Lustre, Pure Data, Simulink, VHDL, etc. But I know of only one attempt at moulding general programming to the dataflow model (Lucid), and it didn't get very far.

This relates to the omission of shell scripting. From the point of view of its influence on how you formulate a solution, I see pipes as the interesting feature. This is a "dataflow" mechanism. Does this put shell scripting within the dataflow paradigm? I don't know. Now, if we had Hartmann pipelines in Unix shell, I'd feel more confident.
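
To illustrate why pipes feel like dataflow, here is a small sketch (names are my own, not any library's) of a pipe-style pipeline built from Python generators, where each stage consumes a stream and produces a stream:

```python
# Pipe-style dataflow: each stage is a stream transformer, and
# composing stages left to right plays the role of the shell's '|'.

def source(items):
    yield from items

def grep(pattern):
    def stage(stream):
        return (line for line in stream if pattern in line)
    return stage

def upper():
    def stage(stream):
        return (line.upper() for line in stream)
    return stage

def pipeline(stream, *stages):
    for stage in stages:
        stream = stage(stream)
    return stream

lines = ["alpha", "beta", "alphabet"]
result = list(pipeline(source(lines), grep("alpha"), upper()))
# result == ["ALPHA", "ALPHABET"]
```

Because the generators are lazy, items flow through the stages one at a time, which is closer to the Unix pipe model than building intermediate lists would be.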

### Lambda the ultimate binder syntax

Consider languages that are popular for making DSLs (Lisp, Ruby, Haskell,...) versus ones that aren't (C, Pascal,...). What pattern is there? Well, most of the DSL languages are functional or have significant functional features. Is making a DSL anything more than defining high-level functions? Maybe the making of DSLs is just part of the functional paradigm.

My claim on this question is that you need anonymous functions (i.e. lambda-abstractions) to represent binders in your DSL. Lambdas (or Ruby's blocks) are, syntactically, a generic binder construction. Languages without easy access to such a core construction make expressing DSLs with binding constructs very difficult.

For a concrete example, if you are modelling a library / sublanguage whose domain is logic, logical quantifiers can be represented as higher-order functions taking a lambda: ∀ x ∈ A, P(x) ∧ Q being represented as DSL.forall A (fun x -> DSL.and (P x) Q).
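
As a sketch of the same idea in Python (all DSL names here are hypothetical), the host language's lambda serves as the binder for a quantifier in an embedded logic DSL:

```python
# Embedded logic DSL where the host lambda *is* the binder for x.

def forall(domain, body):
    # body is a lambda: applying it to each element realizes the binding
    return all(body(x) for x in domain)

def land(p, q):          # conjunction ('and' is a keyword, so 'land')
    return p and q

A = range(10)
P = lambda x: x >= 0
Q = True

# Forall x in A, P(x) and Q  ==>  forall(A, lambda x: land(P(x), Q))
result = forall(A, lambda x: land(P(x), Q))
```

Without anonymous functions you would have to name a top-level function for every quantifier body, which is exactly the syntactic burden being described.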

Lambdas are syntactically the universal binder syntax. In most programming languages the analogy does not hold semantically: anonymous functions have a more complex operational semantics (e.g. closure allocation) that is unnecessary for DSL binders; this has various drawbacks, both practical (e.g. performance issues with systematic allocation) and theoretical (function arrows have negative polarity, which makes HOAS encodings difficult in theorem provers). I have discussed this semantic mismatch in an older comment about syntax extensibility.

Syntax extensibility is related to your question about DSLs. Ruby and Haskell provide no way to tweak the surface syntax; they make DSLs possible by having a syntax concise and flexible enough to express DSL constructs in the core syntax with low syntactic noise. Lisps/Schemes tend to favor syntactic extensibility to remove that noise (e.g. around lambda, which is a not-so-concise construct in those languages compared to Ruby or Haskell). Haskell provides the do-notation as a kind of controlled "syntax extension" point. I make the uninformed bet that DSL writers increasingly prefer the first technique (no custom surface syntax, but reusing/hacking the host language's flexibility), and that it is because of its lower cost. DSLs become more like "Domain Specific Libraries". But the syntax-extension crowd still have their say, because current languages are generally not quite flexible enough on the edges: while the general syntax may be acceptable for those DSLs, the general error reports are usually not good or readable enough. You want your library to define its own error reports, or at least communicate with some flexible error-reporting mechanism; you may also want your DSL to have special optimization passes, if only to try to remove the inefficiencies that you accepted in search of a natural surface syntax.

This is not a new or original topic in any respect, and there is a fascinating literature on LtU.

### Language Binders

Indeed, lambda abstraction as a variable binder is different from closures. If we take the spaghetti-stack approach to closures, then a variable binder occurs before the calculation of a closure and thus does not need a spaghetti-stack reference. It simply serves to construct a lambda abstraction as part of some construct that needs a variable binder. The forall is a good example, except that it comes from logic.

My toy example for dynamic lambda-abstraction creation is function iteration. Function iteration is simply defined as follows, where * denotes composition:

    F^1     = F
    F^{n+1} = F^n * F
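
For contrast, in a language with first-class functions the same definition is direct; a quick Python sketch (my own names), with * read as function composition:

```python
# F^1 = F;  F^{n+1} = F^n * F, where * is function composition.

def compose(f, g):
    return lambda x: f(g(x))

def iterate(f, n):
    result = f
    for _ in range(n - 1):
        result = compose(result, f)
    return result

inc = lambda x: x + 1
# iterate(inc, 2) is inc composed with itself: 1 -> 2 -> 3
```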


Now if we go to logic programming, we don't have functions but relations. But we easily have code as data. So relation iteration can be expressed as follows:

    nth(F, 1, F).
    nth(F, N, X^Y^(call(G, X, Z), call(F, Z, Y))) :-
        N > 1, M is N-1, nth(F, M, G).


The ^ expression is the dynamic creation of the lambda expression. Unfortunately, ^ is not yet part of any draft proposal for Prolog. But call/2,...,call/8 already are, and they are the counterpart of function application.

Interestingly, with some appropriate definitions for ^ we can force it to create and apply closures during execution, so that we can indeed do things like the following:

    ?- nth(X^Y^(Y is X+1), 2, F), call(F, 1, N).
    N = 3


There are some competing approaches to creating and applying closures in Prolog at runtime. We find syntax variations and semantic variations. For example, Logtalk uses >> with explicit free variables, whereas I prefer ^ with implicit free variables. My prototype is not yet publicly available, but will be out in a few days.

The exercise shows that, starting with a language that has no closures but does have code as data, we can simply reserve some code construct as a lambda abstraction. And closures don't need to be based on a spaghetti stack, but rather on identification of free variables, and can easily be constructed and applied at runtime if the free variables already share.

Currently there are some issues with optimized sharing (*), since both my approach and the Logtalk approach internally use copy_term/2. Depending on the Prolog system, we can work with very large closures, or will be stuck at a low level because of memory and/or time problems induced by the copying.

On the other hand, I think the approach is quite flexible. I don't know of a functional programming language, except maybe LISP, that provides this flexibility. It seems that most other languages have lost this flexibility along the way and buried it in some "Template Meta-programming" (**).

@gasche: I guess I should poke into your LtU reading list. The ultimate aim should be clear: keep flexibility and gain performance.

Best Regards

(**)

Thanks, fixed.

### Would you agree with this

Would you agree with this summary?

• DS Libraries add specialized functions, while DS Languages add specialized functions and syntax.
• It is not essential, but very desirable, that DS Languages have their own error messages and do not allow access to the functionality of the host language.

That reading list was helpful. It has pushed me in the direction of thinking that DS Language-creating (or Language Oriented Programming) is not a distinct programming paradigm. My reasoning is that creating a DSL is an instance of implementing a tool, which is a task, not a paradigm, which is an approach to a task.

### Oz is a dataflow language

But I know of only one attempt at moulding general programming to the dataflow model (Lucid), and it didn't get very far.

Oz is another attempt. This is explained clearly in chapter 4 of CTMCP. The attempt is completely successful. It leads to a powerful methodology of concurrent programming in which you start with a deterministic default and add nondeterminism exactly where needed and no more.

### For this discussion...

I want to emphasize that I'm not passing judgement on the intellectual merits of logic programming or any other paradigm. I'm focused on why certain paradigms have been accepted as primary models for general purpose programming.

### Important principle

That was neatly put. For language design, we have the "principle of least surprise". Now I can think of paradigm choice in terms of the "principle of least pain".

C is not a great way to learn procedural programming; it is too low level. Conversely, you don't have low-level on your list. I would add assembly, building up from direct manipulation of hardware.

Forth, which is also on your list, also works for low-level, but I think the big thing there is being stack-based. Other stack-based options are RPL or Factor. If you have students that know any graphic design, PostScript will teach them what they need to manipulate the internals of a PDF.

I'd drop Emacs lisp and use Clojure. You get all the LISP features plus an understanding of how to manipulate the JVM.

Mainly though, what's wrong with Oz for a multi-paradigm class? You have a bunch of paradigms and a textbook that supports that approach.

### Oz

Over there, I was trying to identify the significant programming paradigms and the best language for an introduction to each. Right here, I'm talking about what determines whether a particular paradigm becomes successful for general-purpose programming. I'll try to answer about Oz in relation to both.

I have twice studied programming texts based on a multiparadigm language, CTMCP with Oz and Multiparadigm Programming in Leda. I felt that in both cases, mixing so many different ideas together created an awkward language. I'm convinced that by far the best way to learn a new paradigm is with a small, clean language with a syntax crafted to highlight the significant concepts. Actually, I would prefer to stay with a language like that even after the learning stage.

So much for languages, what does this say about paradigms? Well, I guess a paradigm has to allow such a small, clean, clear language to be created for it. No, wait. What about C++? No, a paradigm definitely doesn't need to be represented by a clean language in order to become successful. OK, I've drawn a blank here...

### Limiting yourself to a single paradigm is completely wrong

I'm convinced that by far the best way to learn a new paradigm is with a small, clean language with a syntax crafted to highlight the significant concepts.

The problem with this approach, i.e., the crafted syntax, is that it makes learning multiple paradigms much more complicated: each new paradigm will have its own crafted syntax! There are very many paradigms that are useful, at least several dozen. Do you want to learn several dozen crafted syntaxes?

Actually, I would prefer to stay with a language like that even after the learning stage.

After the learning stage, when you are writing real programs, you need all the help you can get! Any realistic program needs concepts that are usually seen as coming from different paradigms. All language concepts help you; why reject a bunch of them because they aren't in some paradigm? Why reject perfectly good language concepts?

### I disagree strongly. For

I disagree strongly.

For most normal developers, the single paradigm approach works best. Decision anxiety comes into play when there are too many ways to do something, while developers can benefit from an ideology to guide them (this is the Java-way, the Scheme-way, the...).

Best to learn multiple paradigms in separate language chunks, each with their own distinct syntax. Syntax really isn't the problem anyway; it's semantic diversity, which you can't really deal with using the "one true multi-paradigm language." The UW way of teaching PL can essentially be described as language-of-the-week, and I thought that it was very effective.

Of course learning multiple paradigms is very useful. But you really don't see the LISP-way until you've used LISP, and then you can go off and apply it to C# if you see the chance, even if the syntax is not very convenient. But you only use FP in C# when it really is appropriate, because the language doesn't push you in that direction at all (even LINQ is very meh).

### Usually it's pretty clear

Usually it's pretty clear which paradigm you want for your problem: the least powerful paradigm that reasonably lets you express your solution. A good starting point is functional programming. If it later turns out that you want state (for example if you're doing stateful GUI components), you use state. A bigger problem is with languages that leave so many irrelevant decisions to be made. For example in F# you have many ways to do almost the same thing but not quite:

• var x = 0 vs let x = ref 0
• object vs algebraic data type
• class vs module
• list vs seq

Now there are different limitations on each of those, but in many cases either would do. Because it's not clear that one is better than the other the decision takes more effort even though it's not important.

### You are still not talking

You are still not talking about mainstream languages. The only multi-paradigm language that has gone mainstream is C++, and look what people think about that. There is what you believe, and what has held true so far. My feeling is that multi-paradigm has an intrinsic complexity that arises just from having too many choices.

### C++ is indeed monstrously

C++ is indeed monstrously complex but I wouldn't really call C++ multi-paradigm, unless you want to call it imperative + OO. C++ is again a case of having so many redundant features in a language that you indeed get decision fatigue because you have to make so many irrelevant choices.

What exactly constitutes a paradigm is very vague though, and so is whether a language supports a particular paradigm. Examples of good multi-paradigm languages are OCaml (functional core + objects + state), Curry (functional core + logic/constraint), and Oz (functional core + dataflow concurrency + laziness + state + objects + logic/constraint + distribution). Even though Oz supports so many paradigms, it is much simpler than C++. I don't think we need to limit ourselves to mainstream languages to assess language complexity.

Now you could argue that some paradigms are best provided as libraries. For example you can do constraint programming in a library. Haskell takes this to an extreme, where even state is a library (with monads).
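
To show the shape of "state as a library" (this is not Haskell's actual monad machinery, just a minimal sketch of the idea in Python, with names of my own choosing): a stateful computation is a function from a state to a (result, new state) pair, and sequencing is a combinator:

```python
# State-as-a-library: a "stateful" value is a function s -> (a, s).

def unit(a):
    return lambda s: (a, s)          # inject a pure value

def bind(m, f):
    def run(s):
        a, s2 = m(s)                 # run m, thread the state through
        return f(a)(s2)
    return run

def get():
    return lambda s: (s, s)          # read the current state

def put(s2):
    return lambda s: (None, s2)      # replace the state

# Increment a counter twice with no mutable variable anywhere.
tick = bind(get(), lambda n: put(n + 1))
program = bind(tick, lambda _: bind(tick, lambda _: get()))
result, final_state = program(0)
# result == 2, final_state == 2
```

Everything here is ordinary function composition, which is the sense in which the paradigm lives in a library rather than in the language.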

### Perl

The only multi-paradigm language that has gone mainstream is C++

Perl? Also: PHP, Visual Basic, JavaScript, Delphi (Object Pascal), LISP, PL/SQL, SAS all support at least 2 very different paradigms and did pretty well. I think you can make a pretty good case that the majority of successful languages cross 2 paradigms.

### I agree with Jules.

I agree with Jules. Multi-paradigm needn't mean a salad toss of concepts that leaves you free to arbitrarily choose which you want to use. You can have "there's one (best) way to do it" in a multi-paradigm language.

### Multi-paradigm in a good way

I think we don't disagree all that much. Up above, I was arguing that in learning a paradigm, it helps to have a language dedicated to enforcing appropriate patterns. I think this was the main point over in Why Programming Languages?.

When it comes to production use, of course you need a language that accommodates multiple approaches. However, I do think it should be done as with C#, Python, OCaml, etc. where the language enforces one dominant paradigm but makes room to squeeze in others. If you give multiple paradigms equal footing, you get a language where different things don't look different.

### I agree with Jules too

Multi-paradigm just means having the right concepts for the job at hand. Anything else, and you end up force-fitting. Take a look at Java's class concept to see how much of a Swiss army knife it has become: abstract and final classes, synchronized objects (monitors at class granularity), classloaders (dynamic functionality at class granularity). In a multi-paradigm language, the concepts are cleanly separated.

### Crafted syntax can help

Some people think that the burden of learning a new syntax outweighs the benefit of having new semantic features clearly identified. All I can say is that this hasn't been my experience.

To illustrate the issue with samples from the 99 Bottles of Beer project: for Leda, the contributor did it four ways, imperative, OOP, FP, and LP. For me, the Python, Ruby, Haskell, and Prolog samples give a clearer idea of the shape of code in each paradigm. With the Leda code, everything looks pretty much the same.

### Crafted syntax does not scale

Some people think that the burden of learning a new syntax outweighs the benefit of having new semantic features clearly identified. All I can say is that this hasn't been my experience.

Juris Reinfelds, a colleague of mine, taught programming at New Mexico State University for many years. He realized the importance of learning multiple paradigms early on (for reasons very similar to yours). He once taught a programming course with three languages: Java, Haskell, and Prolog. He found from practical experience that it was a lot of effort for students to learn three syntaxes and three systems, as well as for him to maintain three systems. He was greatly relieved when he found out that with Oz it could all be done with one language and one system. And that is only for three paradigms. There are many more useful paradigms than three.

Here's a list of some of the most useful paradigms, with a few languages, abstractions, and libraries that support them:

- functional programming (Scheme, ML)
- lazy/non-strict functional programming (Haskell, Curry)
- continuation programming (Scheme)
- declarative dataflow (Unix pipes, MapReduce)
- lazy declarative dataflow (Oz, Alice)
- functional reactive programming (FrTime)
- synchronous programming (Esterel, Lustre, Signal)
- message-passing concurrent programming (Actor model, CSP, Erlang)
- active object programming (Scala)
- coordination models (Linda and friends)
- object-oriented programming (Java)
- shared-state concurrent programming (Java, monitors)
- software transactional memory
- imperative search programming (Snobol, Icon)
- relational and logic programming (Prolog, SQL)
- constraint programming (Gecode, Comet)
- concurrent constraint programming (AKL)
- lazy concurrent constraint programming (Oz, Alice, Curry)
- object-capability programming (E, EcmaScript 5)

All of these paradigms have problems for which they are the right paradigm, and trying to solve those problems in another paradigm just forces you to invent a clumsy encoding of the right one.
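To make one of these paradigms concrete: declarative dataflow in the Unix-pipe sense can be sketched with Python generators, where each stage consumes a stream and yields another, and nothing executes until the sink pulls. This is only an illustrative sketch, not how any of the listed systems are implemented.

```python
# A minimal declarative-dataflow sketch: each stage is a lazy generator,
# composed like a Unix pipeline (source | filter | map | sink).

def numbers(limit):
    """Source stage: emit 0..limit-1."""
    for i in range(limit):
        yield i

def evens(stream):
    """Filter stage: keep even values."""
    return (x for x in stream if x % 2 == 0)

def squares(stream):
    """Map stage: square each value."""
    return (x * x for x in stream)

# Wiring the pipeline declares the dataflow; no work happens yet.
pipeline = squares(evens(numbers(10)))

# The sink drives evaluation, pulling items through every stage.
print(sum(pipeline))  # 0 + 4 + 16 + 36 + 64 = 120
```

Each stage is oblivious to the others; the program is the wiring diagram, which is what makes the style declarative.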

Although perhaps that was one of the posts on Perlis Languages that disappointed you :). Looking at your list, the things I notice are the absence of a language with strong support for generics/templates, of the more mainstream static OO approach, and perhaps of some kind of scripting language. At any rate, there is a lot of overlap between languages, so I wouldn't expect a large list.

The difficulty I encounter when talking about this is that, in my opinion, paradigms are broader than a single language, yet individual languages supporting the same basic paradigm can still be quite different, for reasons that may have more to do with a language's audience than with the language itself.

Also it was nice to see APL on your list as I think it is one of the few languages that is both interesting and where the interesting parts don't overlap with a lot of other languages.

My list over there had some deliberately unusual in/exclusions. I was hoping to generate some discussion of why things should or shouldn't be on such a list.

My definition of paradigm was probably unusually strict. I considered a paradigm distinct only if using it would require a different formulation of a solution. This caused me to drop things you might expect to see.

For example, I have been interested in term rewriting languages, especially equational programming as explored in the work of Michael J. O'Donnell. In studying the Pure language, I noticed my solutions would take the same shape as when I used Erlang. So although there was a different execution paradigm, term rewriting didn't seem to present any novelty as a programming paradigm. O'Donnell seems to have come to the same conclusion:

"...for the programmer there is nothing to choose between lazy functional programming and equational logic programming -- these are two styles for describing the same programming languages, rather than two different classes of programming languages."
Equational Logic Programming (1998)

Another case was with Factor. As I got into the language, the mental model I developed was that it is just Lisp stood on its head: the structure of Lisp code is a tree branching down, while Factor's is a tree branching up. Of course, there was a lot of stack shuffling thrown in, and the traditional conditional operators were replaced with combinators like cleave and spread. But problem decomposition seemed to be the same as when using Lisp. It is because of my experience with Factor that I doubt stack-oriented should be called a programming paradigm, and why I didn't categorize Forth as a stack-oriented language.
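A toy illustration of the "Lisp upside down" observation: the prefix expression (* (+ 2 3) 4) and the postfix program 2 3 + 4 * denote the same tree, traversed from opposite ends. A minimal sketch in Python (the two evaluators are invented for illustration and handle only + and *):

```python
# Prefix (Lisp-style) evaluation walks the expression tree top-down...
def eval_prefix(expr):
    if isinstance(expr, int):
        return expr
    op, *args = expr
    vals = [eval_prefix(a) for a in args]
    return vals[0] + vals[1] if op == "+" else vals[0] * vals[1]

# ...while postfix (Forth/Factor-style) evaluation builds the same tree
# bottom-up on an explicit stack.
def eval_postfix(program):
    stack = []
    for word in program:
        if isinstance(word, int):
            stack.append(word)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if word == "+" else a * b)
    return stack.pop()

# (* (+ 2 3) 4) and "2 3 + 4 *" compute the same value.
assert eval_prefix(("*", ("+", 2, 3), 4)) == 20
assert eval_postfix([2, 3, "+", 4, "*"]) == 20
```

The decomposition into subexpressions is identical in both directions; only the traversal order and the bookkeeping (recursion vs. an explicit stack) differ.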

I'm wondering now if I was even too lenient. Is object-oriented really a paradigm in my sense? Everyone knows how Niklaus Wirth felt about OOP:

"...the old cornerstones of procedural programming reappear, albeit embedded in a new terminology: Objects are records, classes are types, methods are procedures, and sending a method is equivalent to calling a procedure. True, records now consist of data fields and, in addition, methods; and true, the feature called inheritance allows the construction of heterogeneous data structures, useful also without object-orientation. Was this change of terminology expressing an essential paradigm shift, or was it a vehicle for gaining attention, a "sales trick"?"
Good Ideas, Through the Looking Glass (2005)

Going back to the thread you mentioned, it wasn't so bad. But often, when people ask for a "Perlis" language, most of the languages suggested really provide only syntactic novelties.

### you do use paradigm in a stricter sense than I do

But even with my broader definition, I never felt that the OO paradigm was significantly different from the procedural paradigm. I think there were significant improvements/extensions in OO languages, but none of it was fundamentally incompatible, even if it was cloaked by a lot of different terminology and some different syntax.

### This probably should be at the top...

...but reading PvR's paper mentioned just below, I see that what I'm thinking of as paradigms is what he calls concepts. His paradigms, I was thinking of as combinations of orthogonal paradigms.

Even taking that into account, it's interesting how two people can get things so differently when trying to cut nature at the joints.

I watch a lot of arguments in PLT run aground on loosely defined terms. But hey, otherwise we'd have to write a book-length preface for every post.

The word 'paradigm' has no meaning at all. It has been overused to excess, and now everyone is and has a paradigm. The technical discussions beneath are valid (and your precise methodology for selecting your languages meaningful), but you can have no hope of avoiding loosely defined terms when you use 'paradigm'.

Use of 'paradigm' has substance to it, imho, just to the extent it's grounded in Kuhn's The Structure of Scientific Revolutions. Use of the term has (as I believe he noted in the second edition) become a status symbol, and this is nowhere more evident than in programming languages, where the substantial uses are lost in an ocean of frivolous uses — but the substance is there.

### The link I should have

The link I should have thought to provide with that: Going co-nuclear.

### Kuhn

I hope this doesn't annoy BOTH of you (gasche and John Shutt), but (a) IMO "paradigm" is probably a useful word, and yet (b) Kuhn himself admitted (in the very book you cite) that his own use of it was vague, and that he himself had given it many different meanings in "Structure". I can't remember exactly what he said, and I'm sure he didn't say it was HOPELESSLY vague, but sadly the meaning of the word is still a bit of a mess. I hope someone writes a nice authoritative book that clears it up one day or, more likely, replaces it with a new term. (E.g., Kuhn's own "disciplinary matrix" is a bit less ambiguous.)

Interesting stuff. I used to teach this, and I think Ehud has done some teaching in philosophy of science too, no?

### It used to be a pair-a-dime

It used to be a pair-a-dime was worth something.