Ivory Towers and Gelfand's Principle

I read today a rant on pedagogical philosophy by Doron Zeilberger, via Philip Wadler's web log.
The first one, due to my colleague and hero Israel Gelfand, I will call the Gelfand Principle, which asserts that whenever you state a new concept, definition, or theorem, (and better still, right before you do) give the SIMPLEST possible non-trivial example. For example, suppose you want to teach the commutative rule for addition, then 0+4=4+0 is a bad example, since it also illustrates another rule (that 0 is neutral). Also 1+1=1+1 is a bad example, since it illustrates the rule a=a. But 2+1=1+2 is an excellent example, since you can actually prove it: 2+1=(1+1)+1=1+1+1=1+(1+1)=1+2. It is a much better example than 987+1989=1989+987.

While writing a reply on LTU, I thought about adapting this principle to programming language design: if an example has a solution that is nearly as good without a given language feature, then that example is not a good motivation for that feature. Perhaps not following this principle is partly what earned FP its ivory tower reputation.

Researchers love to generalize the heck out of any given feature, to add linguistic support for minor issues, etc. etc. It seems that as a result, PL researchers, particularly FP researchers, have a (deserved and undeserved) reputation for solving problems that aren't really there.

I'm not suggesting that every exploration must be practically motivated. Indeed, it's fun and educational to explore what can be done for its own sake. Was laziness ever practically motivated? I'm not aware that it was, but it certainly led to some very important breakthroughs, most significantly how to satisfactorily deal with impure effects in a pure language.

However, when it comes to promoting a given language for wider use, features should be chosen to solve practical programming problems, preferably the problems that a niche finds unsatisfactory. (Or should find unsatisfactory... customers usually don't know what they need.)

What practical problems in what niches are LTU readers aware of?

The hidden information problem.

Most of the problems I've come across fall into the category of the hidden information problem. Genuinely new programming problems hardly exist any more, because most, if not all, of them have already been solved in one way or another.

By hidden information, I mean that a programmer does not know all the pre-conditions and post-conditions that another programmer knows, and that creates huge problems:

-big projects fail because there is not enough documentation. For example, a null reference that one programmer sets may bring down whole systems, because another programmer assumes the same variable is non-null.

-the 'quick and dirty' hacks of one programmer fall flat in the face of another. For example, when a programmer caches in a static variable the results of a computation, and another programmer fails to read the data from the cache.

-programmers fail to realize all of the pre-conditions of an environment. For example, a stream can not be read if it is null.

-there is not enough documentation on the pre/post-conditions of an environment. For example, all the incompatibilities of the various web browsers.
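The null-reference cases above are, incidentally, exactly what option types make explicit. A minimal sketch in OCaml (the `find_user` name and the association-list "database" are just illustrative):

```ocaml
(* The "may be absent" pre-condition becomes part of the type:
   a caller cannot forget the None case, because the compiler
   rejects a non-exhaustive match. *)
let find_user db name = List.assoc_opt name db

let greeting db name =
  match find_user db name with
  | Some addr -> "hello " ^ addr
  | None -> "unknown user"
```

Here `greeting [("ann", "a@example.org")] "bob"` evaluates to `"unknown user"` instead of crashing far away from the lookup that produced the missing value.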

From what I have seen in the past 9 years of my professional experience, 80% of the time spent on a project is divided between making information visible and debugging the project due to hidden information. The remaining 20% goes to the design of the application, which, in most commercial settings, is almost non-existent (most probably due to the fairly dynamic nature of the environment), and the actual coding. Maybe I am optimistic about that 20%...

That's the reason I said in another thread that 'OOP has failed'. No matter how elegant OOP is, it does not help at all with the hidden information problem: you cannot tell what a program does just by looking at OO code, especially in big projects. Web apps are even worse in this respect, because most of the programming is done outside of a programming language, through XML/HTML.

One of the reasons that FP is better is that it tends to make the hidden information problem less important: the programmer does not have to remember why/how/when state changes. Of course, FP suffers from a particular constraint of its own: it is not possible to maintain large data sets efficiently in FP languages, due to not being able to update data in place.

Now that I mention it, how does FP do the MVC pattern? In imperative languages, I can keep all my data in one place, then invoke callbacks (events/listeners/observers/signals/whatever) when the data are updated. The MVC pattern is the most used one in commercial applications, both desktop and web clients. It's elegant and easy to understand. How is that done in FP?

Knowledge based programming

A solution to the hidden information problem is to express a program or system as a knowledge base. The idea has many possibilities but is difficult to develop because, well, nobody wants to try it! But it isn't really new. What is needed is a fresh look at the existing technology in terms of the goal of programming.

Another solution is to write

Another solution is to write down the pre-conditions and post-conditions inside the program, and have the compiler/run-time check them as the program is compiled/run.
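For what it's worth, the idea can be sketched in a few lines of OCaml; `with_contract` is a made-up helper for illustration, not a library function:

```ocaml
(* Wrap a function with explicit pre/post-condition checks that run
   at run time; a failed check raises Assert_failure. *)
let with_contract ~pre ~post f x =
  assert (pre x);
  let y = f x in
  assert (post x y);
  y

(* Integer square root: the input must be non-negative, and the
   result r must satisfy r*r <= n < (r+1)*(r+1). *)
let isqrt =
  with_contract
    ~pre:(fun n -> n >= 0)
    ~post:(fun n r -> r * r <= n && (r + 1) * (r + 1) > n)
    (fun n -> int_of_float (sqrt (float_of_int n)))
```

Calling `isqrt (-1)` then fails loudly at the call site, instead of silently producing garbage that surfaces elsewhere.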

Here is an interesting article about programming and its problems (it also discusses why OOP failed):

http://java.sun.com/developer/technicalArticles/Interviews/livschitz_qa.html

Of course this discussion will not revolve around the actual problems of programming, how to practically solve them, and FP applications; it will revolve around mathematical issues, since most LtU members love theoretical discussions... and thus the 'ivory towers' will be proven...

Another solution is to write

Another solution is to write down the pre-conditions and post-conditions inside the program, and have the compiler/run-time check them as the program is compiled/run.

Not a knowledge base exactly, but a step in that direction. Rules and facts could represent the pre- and post-conditions, and an embedded rule system might evaluate those conditions.

My carpets are known the world over.

Perhaps not following [Gelfand's] principle is partly what earned FP its ivory tower reputation. Researchers love to generalize the heck out of any given feature, to add linguistic support for minor issues, etc. etc. It seems that as a result, PL researchers, particularly FP researchers, have a (deserved and undeserved) reputation for solving problems that aren't really there.

I was surprised to see you come to this conclusion, because "add[ing] linguistic support for minor issues" is precisely what I associate with scripting languages, which typically come with tons of syntactic sugar. Similarly, conventional languages like C have lots of features which can be encoded using more primitive constructs, for example reducing for to while, or encoding all control constructs using continuations. OTOH, I associate FP with minimalism.

Take Scheme. It explicitly states its goal of minimalism in the standard. It's factored into a set of primitive and derived expressions. It's based almost directly on a formalism, the λ-calculus, which is widely regarded to be a minimal standard for computational adequacy.

When I look at good FP papers, I usually see people encode new features in terms of existing ones in order to avoid introducing unnecessary constructs.

When I think of typed languages, I think of things like products, sums, exponentials and initial algebras, all of which are defined by universal properties, which is an instance of picking the minimal exemplar in a family of suitable structures. When I think of categorical models, I think of classifying and free categories, which again are minimal models. Category theory itself is also used in many FP programs, and few people would deny that it is a minimalistic formalism.

To me it seems that FP researchers concentrate far more on fundamental problems --- problems of computation and program structure --- than researchers or programmers of any other paradigm, except perhaps LP.

Was laziness ever practically motivated?

Sure. For instance, laziness makes programs terminate more often than an eager regime. I think termination is a pretty practical property, don't you?
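One way to see the claim: an eager language evaluates the arguments of a user-defined conditional before entering it, while a lazy one only touches the branch it needs. The OCaml sketch below simulates the lazy behaviour with explicit thunks (the names are illustrative):

```ocaml
(* A function that never terminates. *)
let rec diverge () : int = diverge ()

(* A user-defined conditional. With eager arguments,
   my_if true 42 (diverge ()) would loop before my_if is even
   entered; with thunks, the dead branch is never forced. *)
let my_if c t e = if c then Lazy.force t else Lazy.force e

let x = my_if true (lazy 42) (lazy (diverge ()))
(* x = 42; the diverging branch was never evaluated *)
```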

when it comes to promoting a given language for wider use, features should be chosen to solve practical programming problem, preferably the problems that a niche finds unsatisfactory. (Or should find unsatisfactory... customers usually don't know what they need.)

First of all, FP is not only about programming computers. Lots of people study FP or related topics as a mathematical or metamathematical subject. Complaining that these people are not solving your programming problems is as misguided as saying that application X does not solve my research problems. I don't expect Emacs or Mozilla or whatever to support my research efforts, except perhaps in a very indirect, facilitating fashion; if you think a research effort is dubious, you might consider it in an analogous light.

Second, you observe that "customers usually don't know what they need." I think this is an attitude some FP researchers at least might share; Dijkstra certainly did. They target their efforts at problems that they feel — but you perhaps do not realize — are fundamental. Of course, they might be wrong, and sometimes they are; but so could you be.

Third, the notion of "simplest possible nontrivial example" in Gelfand's Principle can mean different things to different people. It assumes an underlying system, with a simplicity metric. Perhaps you find impure effects simple; I do not. I think many people find impure languages simpler because they think they can reason better about them, but I think they usually cannot. They don't understand the system as well as they think they do. Their metric for simplicity is familiarity, not ease of reasoning.

I use Gelfand's Principle myself all the time, although I didn't know it had that name. However, I know when I'm doing it that it is a question of perspective — unless it is something well-defined, as for example trying to understand the free algebra in a certain class.

BTW, this proof has a nonsense step: 2+1=(1+1)+1=1+1+1=1+(1+1)=1+2. It should be: 2+1=(1+1)+1=1+(1+1)=1+2 (use associativity in the middle).

Update: I said the proof above is nonsense, but now I understand the intention; it actually illustrates why the "simplest possible" metric is artificial. I think the intention is to characterize numbers as numerals in unary notation, so you get 1 + 1 + 1 as an expansion to a sequence, and commutativity is a theorem. But you can instead take commutativity as an axiom, in which case numerals are bags. Then the simplest proof is to use the commutativity axiom itself, and this is trivial, so it doesn't satisfy Gelfand's Principle.

It's all in how you axiomatize it; this is what I meant by the principle assuming an "underlying system".

Neko no me no you ni kawaru

For instance, laziness makes programs terminate more often than an eager regime.

What about this statement by Scott?

Define "minimal".

For me personally, anything that does dynamic typing almost automatically falls into the "unnecessarily complicated" camp.

What I'm trying to say is that notions such as "simple", "intuitive" and "easy" only make sense in the context of a well-defined computation model.

And if you don't have a strictly formalized computation model, then you are probably doing something wrong.

I'll bite

For me personally, anything that does dynamic typing almost automatically falls into the "unnecessarily complicated" camp.

How is having just dynamic checks more complicated than having both static and dynamic checks? I assume we are talking about type-safe (sound) languages here, right?

Well.

Briefly, before I get too verbose: assuming a "datatype" is simply some sort of logical condition that imposes structural constraints on a member of the set of strings over an alphabet (i.e. a chunk of "data"), then there should be no reason why these conditions should be evaluated at run-time.

Anything that needs special tags and checks at run-time to distinguish arrays from stacks is bogus in my mind. (Though I grant that from a practical point of view it doesn't really matter. Use what's at hand.)

I guess it's a question of compiler-writing laziness more than anything.

What about the size of the ar

What about the size of the array? If you have "read(n); A[n] = 1" you need to check this at run-time.
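Concretely (an OCaml illustration): since `n` only exists at run time, the check has to happen then, and the language inserts it for you.

```ocaml
(* The index arrives at run time, so no static check can rule out
   the bad case; OCaml raises Invalid_argument instead of
   corrupting memory. *)
let set_one a n = a.(n) <- 1

let caught =
  let a = Array.make 3 0 in
  set_one a 1;                 (* in bounds: fine *)
  try set_one a 10; false      (* out of bounds: exception *)
  with Invalid_argument _ -> true
```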

Hm

I'm thinking bounds checking isn't part of the type system. (Witness OCaml.)

I'd like a better definition of "type" actually. :)

Fascinating.

How would it statically handle the grandparent example above?

(I'm thinking that it wouldn't; am I wrong?)

Practicality.

Honestly, I think we are largely on the same page. Please realize that I am playing devil's advocate a bit... After all, I love functional programming, I certainly think pure functions are much simpler than impure effects, and I'd love to see FP used more in industry. :-)

I am not trying to put down programming language research. Rather, it seems that we oftentimes wonder why we haven't had more influence, and that is the aim of my speculation.

This post was partly inspired by the paper Why Attribute Grammars Matter. Though I've never applied attribute grammars, I have a pretty good idea of what they are. I don't believe that they are useless, rather, my opinion is that the author didn't make his case. He didn't make his case for reasons roughly analogous, in my mind, to why 4 + 0 = 0 + 4 is a poor illustration of commutativity.

I don't believe it's the linguistic features that really attract most people to the likes of Perl, Python, and PHP. Rather, these languages are popular because they are popular, and also because of existing libraries. It's easy to access the underlying operating system, to connect to a database, to send an email, to fetch a web page, or to write a CGI script. If you aren't sure how to do it after reading the documentation, examples and experts abound.

Many of the features of these languages are chaff, but some are not. For example, built-in dictionary types are very convenient for practical programming, and I appreciate the syntactic sugar that is provided for these ubiquitously useful structures. However, I've had more than one FP researcher suggest that everybody should implement (or has implemented) their own dictionaries.

I refuse to use Perl. I can usually stomach Python, though it's far from my favorite language. I hate PHP. It's incredibly loose and encourages sloppy programming in both correctness and organization. Yet I must admit that with PHP, you can do common web-app tasks with virtually no fuss or configuration. That's why all these languages are popular within their niche: because they've put a lot of effort into making the common tasks easy. Indeed, Erlang did exactly this.

Take Scheme: a great deal of time and effort has gone into researching macros. Hygienic macro systems such as syntax-case are a major accomplishment. However, macros are of use to library writers and guru programmers, not so much to common tasks and common programmers.

If SCSH had a friendlier name and came out in 1986 instead of 1994, maybe Scheme would be comparable in popularity to Perl and Python. Is Scheme any less of a language for this? I don't think so, but if popularity and influence really is a goal, it's something to consider.

SML's equivalent to macros would probably be functors, and Haskell has *many* equivalents. Numerous language extensions have come and gone in assorted versions of assorted implementations, which makes bit-rot in Haskell programs extremely common.

Sure. For instance, laziness makes programs terminate more often than an eager regime. I think termination is a pretty practical property, don't you?

Of course I do, but eager evaluation does not stop me from writing programs that terminate. Thus, the "more programs terminate" argument is interesting from a theoretical point of view, but not a practical one, even though termination is a practical concern. Furthermore, laziness is not too practical in general; people who want to write efficient Haskell programs find themselves littering code with strictness annotations. I'd much rather have a pure, strict language I can make lazy when appropriate.

As for taking commutativity as an axiom, you would then have to prove that the addition algorithm actually forms a bag. The context seems to imply the underlying system is the concrete integers, not an abstract algebraic structure.

My point is ultimately this: pure PL research is fine, but then we shouldn't wonder why we don't have so much popularity or influence outside of academia. If we want to influence a niche in industry, then we need a good grounding in what the practical problems really are, and we need to effectively argue that our languages solve these problems.

Practicality

For instance, laziness makes programs terminate more often than an eager regime. I think termination is a pretty practical property, don't you?

No.

While termination is certainly a desirable quality to know and reason about, there's really no way in which termination issues make it into the list of the twenty most important problems in software today. Many of the other purported advantages of lazy functional languages (elegance, minimality of syntax and semantics, orthogonality of constructs) just aren't big issues for most working software engineers. If termination is the best pitch for lazy evaluation, rational software engineers will continue to ignore it for production use. Better pitches would focus on reliability, maintainability (esp. separation of concerns), testability, and scalability. Those are the practical issues that new languages and paradigms could be (and are) addressing.

Let's be honest.

As a professional developer, I would like to see how to program applications with functional programming languages. How to open a database, store the data in my program's memory, do a GUI (preferably a Model-View-Controller) over them, post a response to an HTTP request, do a transaction, etc. Can we do all of these in FP languages? Where are the examples? Are FP languages so superior in a way that justifies the cost and effort to learn and use them?

If FP languages think they deserve a better place, they must show their goods.

Purity vs. Impurity, Redux

If you are concerned with pure solutions, I'd look at Kleisli and HaskellDB for the database/transaction questions, Functional Reactive Programming with a hopefully-more-complete wxFruit for the GUI work, and perhaps Haskell HTTP for posting responses in HTTP requests.

Now, personally, I'm not concerned with purity and am not a Haskell programmer. So far I've been too lazy to reproduce HaskellDB's combinator library atop ocaml-mysql or ocaml-odbc, although it's clearly a good idea. O'Caml has very good bindings to Tk and Gtk+, but no Functional Reactive Programming library that I'm aware of—another good idea that I don't have time to do myself. I'm also very happy with netclient for O'Caml. None of these are pure—not even close—but because they're in O'Caml, I can much more easily encapsulate their risky bits in whatever way is natural to the application I'm writing than I could in a language that wasn't as fluidly supportive of each of the functional, object-oriented, and imperative approaches to programming.

This is not what I want.

I don't want links to libraries...I've had those before.

What I want to see is implementations of applications using those libraries together. I want to see how these libraries are used. Is there any benefit in using these libraries? Do I have to write less code? Is it a more flexible way? Can I apply the MVC pattern consistently at all levels?

For example, is there a phonebook application written with a FP language? I would like to study the source code of that.

Next Steps

axilmar: What I want to see is implementations of applications using those libraries together. I want to see how these libraries are used. Is there any benefit in using these libraries? Do I have to write less code? Is it a more flexible way? Can I apply the MVC pattern consistently at all levels?

The point behind the links is precisely that you can pursue whatever you wish to pursue however you wish to pursue it. Maybe the examples of the use of the systems in their distributions, whether in the form of documentation or of buildable examples, will be sufficient. Maybe it won't; maybe you'll have to try to combine them yourself. Absent that, all I can do is something else I've done numerous times before: describe my personal experience using, in this case, not these exact tools (cf. "I'm not a Haskell programmer"), but other functional tools.

axilmar: Is there any benefit in using these libraries?

Yes.

axilmar: do I have to write less code?

Yes.

axilmar: is it a more flexible way?

Yes.

axilmar: can I apply the MVC pattern consistently at all levels?

No, nor would you want to: MVC is first of all an artifact of the class-based object-oriented paradigm that doesn't translate well to other domains, and secondly traditional MVC even has issues within the class-based object-oriented domain, as exemplified by the choices made in the Java Swing toolkit, the fact that Morphic replaced the MVC system in Squeak, and the tendency to conflate "Model" and "Controller" in J2EE applications, particularly EJB-based ones.

With that said, I have to imagine that what you're after in the pursuit of MVC as a design pattern is an appropriate separation of concerns in a system. That's certainly a desirable property, and it's precisely one that, in my experience, functional systems provide in a vastly more direct and flexible way than systems that heavily emphasize object-orientation with shared state. MVC can be readily implemented in terms of two more general patterns, the Subject/Observer and Mediator patterns, with the "Controller" being the "Subject" to the "Model's" "Observer," and the "Model" being the "Subject" to the "View's" "Observer," with some kind of "Manager (Mediator)" managing the lifecycles and relationships of these objects. An excellent example of an implementation of these patterns in C++ is the Boost Signals library, which provides typed signals and slots with lifecycle management, flexible means of combining the results from multiple slots responding to a signal, etc.
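The Subject/Observer core of this can be sketched directly with closures and mutable cells; the OCaml below is a toy illustration of the idea, not Boost Signals:

```ocaml
(* A "subject" is a mutable value plus a list of callbacks
   invoked on each update. *)
type 'a subject = {
  mutable value : 'a;
  mutable observers : ('a -> unit) list;
}

let make v = { value = v; observers = [] }
let observe s f = s.observers <- f :: s.observers
let set s v =
  s.value <- v;
  List.iter (fun f -> f v) s.observers

(* The "model" holds the data; a "view" is just an observer. *)
let model = make 0
let log = ref []
let () = observe model (fun v -> log := v :: !log)
let () = set model 1; set model 2
(* !log is now [2; 1] *)
```

The "Mediator" role then amounts to whichever code decides who observes whom; no class hierarchy is needed for the wiring itself.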

Since I'm an O'Caml programmer, I prefer an O'Caml solution to the problem (although I will happily use Boost Signals on a C++ job). From On the (Un)reality of Virtual Types:

Abstract: We show, mostly through detailed examples, that object-oriented programming patterns known to involve the notion of virtual types can be implemented directly and concisely using parametric polymorphism. A significant improvement we make over previous approaches is to allow related classes to be defined independently. This solution is more flexible, more general, and we believe, simpler than other type-safe solutions previously proposed. This approach can be applied to several languages with parametric polymorphism that can already type binary methods and have structural object types.

The "detailed examples" in question revolve around the Subject/Observer pattern; the "allow related classes to be defined independently" serves as the expression of the "Mediator" pattern. The concrete example used is of a Window/Manager pairing, a straightforward example of what would fall under the MVC paradigm in, e.g., the traditional Smalltalk-80 context.

To summarize, my position is simply this: object-orientation is only one dimension along which to make design decisions, and the way I've seen object-orientation done in C++ and Java over the years hasn't demonstrated that object-orientation alone, without considerable amounts of application of convention, received wisdom, crystallized experience such as "design patterns," etc. is all that beneficial. That leads me to wonder at what point it makes sense to attempt to concretize as much as possible of that convention, received wisdom, etc. at the language level. You see some of that in the observation that in the GoF book, "16 of 23 patterns have qualitatively simpler implementation in Lisp or Dylan than in C++ for at least some uses of each pattern." So the irony may be that some aspects of the solution space exhibit a "Back to the Future" flavor, since functional languages like Lisp and ML got some of this right back in the 1950s-1970s. But again, I'm no more a functional purist than an OO purist; I like to use a functional approach when it makes sense, an OO approach when it makes sense, an imperative approach when it makes sense...

MVC

Using Google I found these slides about wxHaskell. There is an example of MVC near the end of the presentation (slide 53).

Bunch of examples.

Examples in O'Haskell.

* An interactive drawing program
* An AGV controller
* Sieve of Eratosthenes
* Semaphore encoding
* Queue encoding
* Delay timer
* A Telnet client
* A telecom application

[Edit: More examples, Scribble and Solitaire among them.]

Nice examples, but by looking

Nice examples, but by looking at them, I haven't seen anything that cannot be done with +-10% effort in other languages.

Here is a real case: my company wants to write some web apps. Right now we are using J2EE: JBoss, Apache, Hibernate, JSP and Java Server Faces in the future. The J2EE environment is difficult to manage, but we get nice scalability, good throughput, good resource pooling and interoperability with .NET, COM objects, etc etc. Is there a similar Haskell environment with these capabilities? is that Haskell environment so much better than what we are using now to justify the change?

I hope I am not misunderstood. I am talking purely from a practical point of view. From a theoretical point of view, FP languages are the best thing since sliced bread.

What is this "theoretical poi

What is this "theoretical point of view" everyone talks about?

We're not talking about natural sciences here, you know; I don't see how software can be delineated into "practical" and "theoretical". (Strictly speaking, all software is inherently theoretical.)

Timescale

I would say that theoretical/practical spectrum maps to the expected payoff time - purely practical means it pays your bills today, but deteriorates quickly. Purely theoretical means it will pay your bills in infinity :-)

Everything else is in the middle.

Hm.

Good point, actually.

Still, I have this idealistic idea that the only way to pay the bills in infinity is to start by baby steps, paying some small part for today. :)

It is impractical to quibble about practicality

I don't see how software can be delineated into "practical" and "theoretical". (Strictly speaking, all software is inherently theoretical.)

"Practical" does not mean "physical".

A working definition for "practical" as it relates to software would be "software that does things that people who don't care about software still care about."

"Theoretical" can then mean "of interest almost exclusively to those who care about ideas of how to build software".

Pretty simple, pretty effective. Anyone seeking serious discussion could probably get on with it. I hope you like it.

My point.

The whole "practical"/"theoretical" debate thing comes from the natural sciences and thus doesn't apply to computing in any meaningful sense.

We all want software to work well, whether we are developers or not. And if it doesn't work well -- well, then I just don't care for it, even though I "care about ideas of how to build software". I'm sure I'm not alone in this.

'Practical' means that I have

'Practical' means that I have a set of tools that, even if not theoretically proven correct, work exactly as I want.

'Theoretical' means that I have a set of tools that, even if proven correct by application of mathematics, are still inadequate for the task at hand.

This whole thing translates to "I don't want to use Haskell if I have to implement the whole functionality of J2EE myself". It does not make sense, from an economic point of view.

Correct me if I'm wrong...

...but "proof by demonstration" is actually more stringent in many cases than "proof by application of mathematics".

Which basically means that if you can demonstrate the correctness of J2EE for the average case, you have satisfied the "mathematical" bits as well, without expending extra effort.

In other words, the 'practical' approach to software is actually more stringent mathematically than the 'theoretical'.

This is Mistaken

The best counter to this that I'm aware of is Oleg Kiselyov's writing on the covariance problem in object-oriented programming. Basically, there's a rigorous sense in which, given object-oriented programming with state, your compiler cannot help ensure that your code does not violate the Liskov Substitutability Principle, even for types with relatively trivial algebraic properties such as Bags and Sets, let alone for types with considerably more complex algebraic properties as found in the real world. As Graydon Hoare wrote, "then I read oleg's old page about many OO languages failing to satisfy the LSP, and realized how important and overlooked this critique is. the basic result is that subtyping in most OO languages these days is behaviorally wrong, and when it works it is working only by accident."
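Oleg's Bag/Set point can be compressed into a few lines; the OCaml objects below are a toy rendition of it, not his original code:

```ocaml
(* A bag: add always inserts, so adding x twice gives count 2. *)
class bag = object
  val mutable items : int list = []
  method add x = items <- x :: items
  method count x = List.length (List.filter (( = ) x) items)
end

(* A "set" reusing the bag's implementation but suppressing
   duplicates. Structurally it still has the bag's interface... *)
class set_as_bag = object
  inherit bag as super
  method! add x = if super#count x = 0 then super#add x
end

(* ...but it silently breaks the bag's contract: after adding the
   same element twice, a bag must report a count of 2. *)
let add_twice (b : bag) = b#add 1; b#add 1; b#count 1
(* add_twice (new bag) = 2, but add_twice (new set_as_bag) = 1 *)
```

The compiler only tracks method signatures, so it happily accepts the substitution; the behavioral requirement is invisible to it, which is exactly the "working only by accident" point.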

And this is what I fight in my work, the kind of ill-informed acceptance of ill-performing software based on the misapprehension that it's just unimportant to understand computation, or that dealing in kinda-sorta approximations of correctness is OK. Are we really supposed to just shrug it off when "when it works, it is working only by accident?"

That's not quite what I meant.

Simply, a proof-by-construction is oftentimes more "rigorous" than a more "mathematical" proof.

I don't really believe in the magic of math, and especially so when computing is concerned.

Makes Sense to Me

Ah, stated this way, I'm inclined to agree: that is, I put greater faith in "intuitionistic," "constructive" mathematical logic than in other styles, i.e. I feel comfortable rejecting the "law" of the excluded middle and insisting that that which we can know is neither more nor less than that which we have proofs for.

Having said that, most software engineering doesn't come close to following any category of logic, so a real conundrum remains in practice.

Speaking of which...

The best counter to this that I'm aware of is Oleg Kiselyov's writing on the covariance problem in object-oriented programming.

This reminds me... Are there any statically-typed OO languages that separate an implementation inheritance facility from a notion of subtyping? I don't know of any, but that doesn't mean much...

Using a Java-like syntax and Oleg's example, I'm thinking in particular of something like:

  interface Set { ... }
  interface Bag { ... }

  // type of BagImpl is a sub-type of Bag
  class BagImpl implements Bag { ... }

  // type of SetImpl is a sub-type of Set, but has
  // no relationship to types of Bag or BagImpl
  class SetImpl extends BagImpl implements Set { ... }

...Ocaml does. It uses struct

...Ocaml does. It uses structural typing for objects.
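A small illustration of what that means (my example, not anything from the parent post): OCaml decides compatibility from an object's method signatures alone, independently of any inheritance relation.

```ocaml
class counter = object
  val mutable n = 0
  method incr = n <- n + 1
  method get = n
end

(* No inheritance relation to counter at all, but the same shape,
   so it is accepted wherever that shape is expected. *)
class double_counter = object
  val mutable n = 0
  method incr = n <- n + 2
  method get = n
end

(* Accepts any object with at least incr and get. *)
let bump (c : < incr : unit; get : int; .. >) = c#incr; c#get
```

Here `bump (new counter)` and `bump (new double_counter)` both typecheck, even though neither class inherits from the other.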

I'm not sure I get it...

I would have thought that structural typing would actually work in the opposite direction... In the original example, Set and Bag might have identical interfaces, structurally speaking, but we want to capture some essential semantic distinction anyway, so that neither is a subtype of the other, even if one is implemented in terms of the other. It seems like this is a natural case for nominal typing rather than structural...

On the other hand, I think I must be misunderstanding something, because I freely admit that I don't know much about the Ocaml object system. I spent a bit of time with the docs, but not enough to convince myself either way. Any chance I could convince you to cook up an example?

Sather does it

See this page in particular.

Cool!

It looks like this is exactly what I had in mind. I've heard of Sather, but don't know much about it... Do you know of any consensus on this feature? Good, bad, ugly?

Nope!

I don't know much except from reading about it from the web. AFAICT it's expressive but cumbersome.

O'Haskell/Timber does it.

See this page for a brief explanation of the typing part. To inherit an implementation, the programmer selects (by assignment) which methods come from which parent. Are there languages out there that can automate implementation inheritance when the programmer so wishes, except for the conflicts?

Practical libraries

Time changes all things. I doubt the statement "I don't want to implement the whole functionality of J2EE myself" would have made much sense some 5 to 10 years ago. Indeed, the same argument about Java having immature libraries was made by many in those not-so-distant frontier days. A parallel argument is often made about COM or .NET.

My POV is that (a) there are unfilled niches where the mainstream languages are inappropriate; (b) the mainstream languages will be influenced by these "theoretical" languages; and (c) I get paid for things nobody else is either capable of or willing to do - meaning it is sometimes rewarding to be different.

Anyhow, I find the line of argument that one will not use a PL unless it has feature X first to be irritating. First, most PL design is done by researchers and volunteers. Researchers are interested in advancing the state of the art. Volunteers are interested in solving their own niche concerns. Although wide-scale adoption would be nice, there is no formula that says if you add feature X, the world will come rushing to your door. Indeed, if you add a feature, the requester will then shift and say, well, I would use the PL but it doesn't have feature Y (or Z, and so on).

As in the Open Source world, if you want a PL feature, the suggestion is you've got the source and the tools, why not write it yourself and make it available to others. It's been a pretty good model for Linux and Apache. Anything else is just a recipe for saying I will never use a non-mainstream PL because it's not popular.

Formal reasoning.

You asked if a number of things were possible in a functional language. Yes, many things are possible in a functional language, as shown by the links I provided. I avoided your question of whether functional languages were superior, since it is a great way to start a big flame war based on personal beliefs.

Your claim however is that it is at most an additional 10% effort to do it in other languages. I would love to see some formal reasoning about your current software.

Simon Peyton Jones has Calling hell from heaven and heaven from hell available on his homepage, if COM objects are what you are interested in.


You asked if a number of things were possible in a functional language.

Actually, I did not ask if a number of things are possible in a functional language, because I have asked that previously and got a wide list of links that covered my question.

What I asked is 'what is the benefit of migrating to an FP language environment'. To me, the cost of migration simply outweighs the benefit of correctness for the task at hand (i.e. distributed business apps). Maybe if the task at hand were more 'serious' (for example, launching a rocket or maintaining a nuclear site), then the benefits would outweigh the cost of migration.

I avoided your question if functional languages were superior

As programming languages, FP languages are indeed superior, being based on mathematics. But a programming language is more than its compiler: economics dictates that organization, support, libraries, documentation, etc. play a role that is, if not greater, at least as important as the compiler's.

Your claim however is that it is at most an additional 10% effort to do it in other languages. I would love to see some formal reasoning about your current software

I tried to write the scribble application in C++, in a way similar to scribble.hs. For example, I created the tool widgets with a function over a table of colors, instead of creating them one by one. The result was similar in lines of code. Then I inspected the rest of the examples, and I did not see anything that I couldn't do in C++ with similar effort. Of course this is intuitive rather than formal, nor do I claim that I understood 100% of the examples.
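The table-driven construction described above is available in any mainstream language; a sketch in Java, where `Button` is a hypothetical stand-in for a real toolkit widget:

```java
import java.util.ArrayList;
import java.util.List;

public class Toolbar {
    // Stand-in for a GUI widget; real code would use a toolkit class.
    static class Button {
        final String color;
        Button(String color) { this.color = color; }
    }

    // Build one button per entry in a color table, instead of
    // writing out each widget by hand.
    static List<Button> makeColorButtons(String[] colors) {
        List<Button> buttons = new ArrayList<>();
        for (String c : colors) buttons.add(new Button(c));
        return buttons;
    }

    public static void main(String[] args) {
        String[] palette = { "black", "red", "green", "blue" };
        System.out.println(makeColorButtons(palette).size()); // prints 4
    }
}
```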

Well-Stated

First, let me say that I think this is a very good articulation of the issues in applying functional programming to production systems in a given domain, in this case, web applications. I think the challenge in responding to it revolves around the fact that very few of us are given the opportunity in a production setting to apply what we may have very good reason to believe is a better alternative to the way things are traditionally done. Certainly, when I reflect on my experience in the J2EE world, it isn't lost on me that we have terminology and tools like "stateless session beans," and that often what we end up doing, again, is following by convention/received wisdom what is formalized in other software development contexts.

So what this means is that it's rare to have the opportunity to point to something concrete that we've built, e.g. with Haskell or O'Caml, that has been built at the kind of scale where the benefits of the conceptual basis, and the motivation for the tool choices, stand out in anything resembling a transparent way. Those of us who prefer Haskell or O'Caml, therefore, owe Paul Graham a debt of gratitude for being the standard-bearer and pointing out that having developed Viaweb in Lisp was a big win. The challenge before us is to place ourselves in a position where the next Viaweb can be in Haskell or O'Caml. There's certainly no reason that it can't be, given tools such as HSP/HaskellDB or AS/Xcaml. But unless you're an entrepreneur, it's hard to gain control over those choices.

Scheme

In particular SISCWeb would be relevant to your task.

here

Right now we are using J2EE: JBoss, Apache, Hibernate, JSP and Java Server Faces in the future... nice scalability, good throughput, good resource pooling and interoperability with .NET, COM objects, etc etc.

Is there a similar Haskell environment with these capabilities?

no. if you want java, you know where to find it.

if you want to develop web applications with Haskell, you should first drop the java mindset and bloated way of doing things.

Valid Point Missed

rmalafaia: if you want to develop web applications with Haskell, you should first drop the java mindset and bloated way of doing things.

Call it bloated if you like (and I certainly do think of, e.g. EJBs as bloated, but I haven't used them in years), but the valid point remains that there are lots of people who know how to use J2EE technology to build web applications that can handle millions of unique users per day, and when they need to scale up, it's a matter of adding a new box to the network and editing a single configuration file. I happen to be one of those people.

Now, I happen to believe that functional technology actually makes that process easier than it is with Java. The Java community has only really gotten over EJBs and gone to lightweight containers with Inversion of Control, and persistence of POJOs (Plain Ol' Java Objects) with tools like Hibernate, over the last few years. The functional community had already learned many of the lessons that the Java community spilled their own blood over, but the only serious attempt I'm aware of to reproduce (and perhaps expand upon) the capabilities of the J2EE environment in a mostly-functional setting is AS/Xcaml. It would be nice to see other such efforts from the Haskell, Clean, Oz... communities.

Excellent example of ocaml: mldonkey

I'm surprised no one has mentioned this, but one of the most visible examples of a mature application written in OCaml, whose source you can examine, is MLDonkey.

For other real-world applications of FP
http://caml.inria.fr/about/successes.en.html

Model-View-Control as an example of functions as first-class values

The book "How To Design Programs" uses Model-View-Control to build a GUI right after introducing the fact that functions can produce functions.
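HtDP itself does this in Scheme, but the idea — a view is just a function the model calls, so wiring up a GUI is higher-order programming — can be sketched with Java lambdas (all names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

public class Counter {
    private int value = 0;
    // Views are plain functions; the model just calls them on change.
    private final List<IntConsumer> views = new ArrayList<>();

    void addView(IntConsumer view) { views.add(view); }

    // Controller operation: update the model, then notify every view.
    void increment() {
        value++;
        for (IntConsumer v : views) v.accept(value);
    }

    public static void main(String[] args) {
        Counter model = new Counter();
        model.addView(n -> System.out.println("view sees " + n));
        model.increment();  // prints "view sees 1"
    }
}
```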

--
Jens Axel Søgaard