The breaking point of language usability?

The indefatigable Bruce Eckel is learning all about Java generics so he can write about them in a way that explains things to mere mortals. It is clear to me that the Average Joes who have been using Java are going to have their minds blown by such things, and I wonder if Java has taken a large step along the pirate-ship plank toward C++-like complexity and confoundedness. Along this vein of thought, advanced functional languages don't get much use in industry, and I think people attribute that partly to their tougher learning curve.

So my question is, at what point have you made a language that regular folks simply won't be willing to learn? And the challenge is, how can languages be designed to give advanced benefits yet hide the complexities? Why can't machines better hide issues like co vs. contra variant type usage (or whatever)?

A thought of my own

Basically, I think if people must learn any theory at all, you've gone past the "if it bends, it's funny" into the "if it breaks, it's not funny" realm. People are lazy - I'm proof of that, and I consider myself to have more interest in learning how languages do and should work than like, well, 60% of programmers?

Yes

I didn't see your comment when I posted below. I agree 100%. It seems that as soon as you have to actually think a little bit, many people just turn around and walk the other way ("I don't get it" or "So, what's the point? I can do that in language X too.") and they miss the whole point. The sad thing is, most of these people will never experience the mind-bending euphoria of the "I get it!" moment and the effect it has on your thinking for the rest of your life. (The kind of experience I had to a certain extent learning Lisp in school, and again to a greater extent when watching the SICP lectures, especially the lecture on "delay" and "force". Wow! That is great stuff!)

Functional languages hard?

I wonder, are functional languages really hard? I'm not convinced they are intrinsically hard, just hard for those whose native (programming) language is imperative or who don't have a solid understanding of foundational mathematics. What I mean is: if someone has a solid understanding of basic math principles, that foundation will make learning functional languages much easier (comparable to imperative languages), and a solid understanding of the mathematical concepts foundational to computer science benefits a programmer regardless of whether you do functional programming or not.

To answer your question: I'm not sure when you reach that point. A lot of people were (until Java came along and took the game) willing to learn C++ despite its legendary complexity, and I'm not sure that Java converted a lot of C++ programmers because it was simpler than C++. Hype and marketing, in my mind, were much more influential in Java gaining popularity over C++.

Basic

I used to be a teaching assistant at a rather good school, and even there I encountered many students who lacked the so-called "basic mathematical maturity". They are now probably happily employed somewhere.

A ton of people (in the industry of course) admitted to me that they didn't like math (especially proofs), but they enjoyed and were good at programming.

I don't think the "extreme mental load" is that far away. A lot of people didn't understand Lisp, STL, even OO quickly (or at all, sometimes).

We will never know who should not mess with Java's generics until they shoot themselves in the foot, or quit using it in frustration.

PS: C++ now seems complex, but back in the day nobody even used that word (IIRC). It was OO and the right thing to do, it was going to automatically deallocate memory via destructors, and you could keep writing plain C as well if you so desired.

Anecdote re: proofs

Doing proofs can be laborious, for sure. However, there is a small chance that good tools can take the edge off, if not make it outright fun. A small example: in college there was an elective class in logic using an interactive program to help step through some rudimentary logical proof development. Folks who used it honestly said it was kinda fun, but I'm sure if they had to do it with pen and paper they would not have even started.

I think there is plenty of room to come up with systems and tools that take some of the bleakness out of the picture. It would probably require real experience with usability and AI-ish things, but it isn't utterly untenable.

Or maybe the problem is with the students

I'm an avid lisper (I plan to learn Haskell and/or OCaml soon, so that might change ;). My girlfriend just completed her degree in anthropology and wanted to understand what I was doing when I typed on my laptop all night, so we decided she would learn Scheme with SICP. Admittedly, she used to be a strong student in high school, but she still didn't seem to find recursive and functional code itself hard to learn. In fact I'd bet she'd find the iterative version of most functions she's written so far much more convoluted. So far, the major hurdles seem to be that predicates don't unify (that = symbol shouldn't be overloaded with so many different meanings) and that she has to make sure to actually call the functions on arguments if she wants to use the results (she seems to tend to write Scheme as we speak maths).

From my limited (heh) experience, FP doesn't seem to be that hard for average (maths-proficiency-wise) university students. It seems to me that faculties might simply have to be careful not to admit students (in CS as in anything else) who would be better served in vocational schools.

As a programmer...

...with a PhD in Mathematics (the pure stuff, I'm used to abstraction and I've studied foundations) and who's been programming on and off for 25 years (though professionally for only half that time) in a wide variety of languages and who was writing recursive code since childhood I have to say that I found Haskell to be the hardest language to learn of all languages I have learnt (except maybe JCL but I'm not sure that counts). Some, but not all of this can be blamed on interference from other languages (Haskell classes really are not anything like C++ classes for example). If a 'foundation in mathematics' makes life easier for programmers wishing to learn Haskell I dread to think what it's like for people who are weaker at mathematics. I only kept at it because I think category theory is a nice subject (the elementary stuff anyway) and I saw learning about monads in the context of programming as something I couldn't miss out on.

Having said that - I now love Haskell, but hesitate to recommend it to people who aren't mathematically inclined.

except maybe JCL but I'm not

except maybe JCL but I'm not sure that counts

No, it doesn't ;-)

I wonder about what the statement "Haskell is hard to learn" really refers to. Learning the simple pure functional part, and using the REPL surely isn't that hard, is it? Is it monads, or are they just scary when you try to learn them via theory instead of by example?

It seems to me that where Haskell becomes hard is (a) when you want to create large systems, which really isn't covered in most tutorials etc., and (b) reading "point-free style" code.

some low-level hurdles

Here are a couple things I ran into while learning Haskell:

The function call syntax is unfamiliar. It takes a while to get used to how nested function calls work in Haskell. (Parens need to be nested like Lisp rather than like C.) It's conceptually trivial, but it takes a while to sink in, and in the meantime it slows you down a lot. Even with practice, I still find f(x) to be a bit more comfortable than "f x". Mess with low-level syntax in this way, and people get cranky.
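
To make the point concrete, here is a tiny illustration of the two notations (the functions `f` and `g` are made up for the example):

```haskell
-- In C-family notation you'd write f(g(3), 4).  In Haskell,
-- application is juxtaposition, and parentheses only group an
-- argument that is itself a call:
f :: Int -> Int -> Int
f a b = a + b

g :: Int -> Int
g x = x * 2

result :: Int
result = f (g 3) 4   -- f(g(3), 4) in C notation; writing f (g 3 4)
                     -- or f g 3 4 would parse as entirely different
                     -- applications (and fail to type-check)
```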

I also find that function currying and type inference make code very concise but much less readable. By contrast, Java's redundancy means that I can deduce most of an unfamiliar method's type signature from a method call. Assuming good method names, this allows me to read an unfamiliar method and tell quite a bit about what it does without looking up any of the methods it calls. In Haskell, I can't tell how many arguments a function takes or the types of expressions without looking up all the function definitions and learning the program bottom-up. In this way it's rather reminiscent of Forth, where to understand an expression, you had to keep track of how every method would modify the stack. (The Joy language is a pretty interesting exploration of the connection between functional programming and Forth.)
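
A small sketch of that readability trade-off (the names here are invented for illustration):

```haskell
import Data.Char (toUpper)

-- Currying means a call site never reveals how many arguments a
-- function "really" takes.  'shout' is map partially applied:
shout :: String -> String
shout = map toUpper        -- looks like a value binding, acts like
                           -- a one-argument function

add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

-- Without the signatures, nothing at the call site tells a reader
-- whether 'mystery' is a finished value or still awaiting arguments:
mystery :: Int -> Int
mystery = add3 1 2

answer :: Int
answer = mystery 4         -- 7
```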

I don't know.

Maybe it isn't a foundation in mathematics. Maybe it's something else: an ability to forget what you already know? I don't know, but an understanding of mathematics surely helps (experience with abstractions, mathematical functions, and such). Maybe some people rely upon previous knowledge and experience more than other people when learning new concepts, and this makes learning functional languages harder (Wait, you can't do that in C, C++, Java, or C#. I don't get it.). Or maybe some people just don't realize that the model of a functional language isn't a register machine, but is the lambda calculus. Or maybe some people just don't care enough to put forth the effort to learn something new. (I'll never use this in industry, so why should I learn it?)

I’ll admit I’m a little disappointed if it’s the latter, but I wouldn’t be surprised if it is. I know some people who are really good coders (I’m not sure if I’m willing to call them good programmers, as that involves more than just coding up a solution to a problem), but don’t care to learn anything that isn’t “relevant” in industry.

On the case of Java Generics...

my guess is "no, we're not at that point". For the vast bulk of Java programmers, generics mean nothing more than that their co-workers will actually document what goes into their collections, and document it in a checkable way. Most Java programmers will never write a generic class, so it doesn't really matter all that much if they could. On the other hand, they will use generic classes developed by library vendors. That doesn't take any deep understanding, and does add serious value.

Re: client vs. creator

I like the idea of that separation, of it being a way to try to get added power without forcing it upon everyone. Yet I think the wall between the two halves can't be that solid; if somebody needs to use a library that makes use of obscure typing, then they will either give up on it, or have to learn what is going on to get it to compile.

A straw-person pipe-dream 'solution'

First, one kind of solution would be for everybody to have a really good education in mathematics. I think that would be a good solution in many ways because math would be useful to those people for many other things as well.

Having said that, however, I do think that math is hard (if only because it is perceived to be hard) and so that is not a realistic solution (to the nebulously specified problem of letting 'average' people do 'advanced' programming).

So that makes me wish for development systems that have a means of asking me what it is I'm trying to accomplish, and then generate the appropriate results. (I realize this is super futuristic and hand-wavy.) See, instead of dealing with co and contra variant types and self types and whatever else (per Eckel's thoughts in the links above), the system should interact with me in a natural language to get the desired result.

Yes, math would be better in many ways, and yes, natural language interaction is the stuff of movies not current reality, but might it still be worth thinking about that direction? (Huh, I often come back to things along the lines of what my old boss is doing.)

More fuel, it seems to me

In Eckel's on-going saga with Java generics, it is noted by various folks that:

  1. People want to do things, like slap two types together to get a 3rd one.
  2. There are many different 'solutions' to letting people specify what it is they want to do.
  3. It is hard to figure out all the resulting nuances of what you specified.

So I think that even if everybody did know math inside and out, we'd still have a problem. Humans shouldn't be required to foresee all the repercussions - that is exactly the kind of thing we ought to get our machines to do for us (although you can get into the arguments about how calculators are rotting kids' minds, which I sorta agree with). Part of interacting with the IDE/compiler via a natural language should be that the system can tell me what the results are of my choices, to try to make sure I'm picking the right things.

(It just seems like, as much as computers and programming languages are really cool, and advancing over time, it is pretty sad that we are still stuck in the era of 'pre Star Trek' usability.)

Understanding what you're doing is fairly important

The problem is that trying to do 'advanced' things without actually understanding what you're doing is a recipe for shooting yourself in the foot. Without a grounding in 'advanced' or 'hard' topics such as mathematics you may not be able to articulate what it is you are trying to do, or, worse yet, may not even be aware that you can do something at all. I would hardly expect someone to be able to successfully develop a high-performance gas-turbine jet engine without understanding the mathematics underlying thermodynamics and fluid mechanics. Could you hack together some kind of jet engine? Sure. But if you want to make use of high-performance materials and complex inlet geometries - the 'advanced' stuff - you need a deeper understanding of what you're doing. I'm not sure why so many people seem to think that it should be possible to develop complex software using 'advanced' techniques without actually understanding what it is you're doing.

I agree

The problem is that trying to do 'advanced' things without actually understanding what you're doing is a recipe for shooting yourself in the foot. See, that's where I want the system to be helping - there isn't any reason it can't (in the limit) do a better job of understanding the repercussions than 99.9% of people.

Honestly, I'm not some math-hater. I totally agree that somebody or something needs to keep an eye on all the issues.

I'm starting off with a desire to make 'advanced' things more widely accessible. While you can make arguments for that being a specious goal, I think the way Java is going is some evidence that it isn't entirely dumb. Now, I don't expect that the relevant math etc. will become more widely understood, and I think machines could be leveraged to help take care of the 'hard' issues, believe it or not. That's what I'd like to see. It might not be this century, but keeping the thinking cap on isn't immoral.

More agreement?

So there are (at least) two benefits to understanding the math.

  1. Being able to see the bad ramifications: the confusion and complexity that comes with the available options.
  2. Being able to see the good ramifications: the gordian knot cutting power, the ability to clean things up and make them more robust.

I think that trying to beware of dragons could get some help from systems. I think that trying to find better options could also be done with automated assistance. I'm not sure which would be easier - I suspect that looking for the bad results rather than the good would be more amenable to the kind of knowledge-base things we can do these days? But how can we advance both causes?

Bad examples are not enough

Bad examples are not enough. You need to be able to understand why the bad consequences have occurred. Otherwise you end up in a trial-and-error mode where you keep plugging in new designs until you get something that works. For anything non-trivial the number of tweakable parameters is huge, so you need some understanding to help you prune the parameter space.

Granted, a lot of modern software "engineering" bears a disturbing resemblance to the classic "thousand monkeys producing Shakespeare". But I see no reason to condone that approach. Nor do I think it's a tenable approach as the complexity and/or subtlety of the concepts involved increases.

Search space

I agree with pretty much any issue you'd come up with. But nevertheless, I'd like it if folks could keep an open mind about what possible goodness could come out of more automated systems. Try to imagine what a good one would be like, rather than only what all the problems are with the bad ones. (The latter part is still important, of course, to make sure one doesn't go off and do something that is a complete waste of time.)

At any rate, I think this is obviously all going to boil down to some form of AI if automation is to become reality. The thing is to figure out whether we can get away with our current AI abilities, or if we would have to wait for the Singularity. That's the kind of 'future systems' thinking I'd like to see from programming language development (in addition to the regular super math theory stuff we mostly get :-).

A "good" tool

Try to imagine what a good one would be like, rather than only what all the problems are with the bad ones.

IMHO a "good" tool would be one that can educate the user, or help to educate the user. But, since the going-in assumption of this thread was (or at least appeared to me to be) that the tool-users should not need to learn anything new, it seems that my "good" tools are excluded from the solution space. I'm all for tools that are the equivalent of interactive textbooks for 'advanced' concepts. I don't think it's reasonable to think we can develop tools that allow people to use 'advanced' concepts without actually understanding what those concepts are.


[Edit: it's not obvious to me how this is a language issue, so I'm going to try to avoid commenting further on this thread. The closest relationship I can see to languages is the idea of having some form of REPL that provides running tutorial advice. In that case, we might talk about language features that support such a tutorial approach. I have no idea what those might be though.]

Re: Is this a language issue

Yes, right, the languages might well be our regular ones, and then this would all be implemented by tools, tools, tools on top that would generate Haskell or C or whatever. So maybe it is all super duper irrelevant to LtU.

Automation will not solve the problem

See, that's where I want the system to be helping - there isn't any reason it can't (in the limit) do a better job of understanding the repercussions than 99.9% of people.

I don't dispute that a certain amount of automation (in the sense that the system does the hard bits for you) can help. But mostly what it's good for is complex book-keeping. If you don't understand what you're doing the system can produce results that are just plain wrong, and you (a) may not realize it, and (b) won't know how to fix it. I've seen this happen a lot in various engineering disciplines where CAD tools have become popular. The tools are wonderful for performing complex computations that would take forever to do by hand. But, if the user has no underlying understanding of the theory of (finite element structural analysis | control system design | thermal analysis | orbit mechanics) the results often end up being wildly wrong. Not that the computations are incorrect. But, by the GIGO principle, the results are useless in the real world because the user didn't understand what to feed into the tool, or didn't understand how to correctly interpret the results they got.

I'm all for making 'advanced' stuff more accessible. I just happen to believe that the right way to go about that is through education rather than automation. The math in question is a way of dealing with subtlety. Without precision of expression, subtlety gets lost. And the subtleties are usually what makes the 'advanced' different and important.

Re: GIGO

Yes, that is very true. I recently attended a biotech conference where one of the invited speakers railed against folks who don't know how to analyze their FE system and prove that it is within various error bounds, yet still publish things. That drives me nuts.

But! There should always be a but. The question is then surely how can we get people to suck less, and one possibility is to have the tools be something to lean on. I completely agree that can be fraught with ultimate peril, but I don't want that to be a reason we avoid trying to develop ever more helpful systems.

Put it this way (and I apologize for the ongoing Star Trek examples): if we had a holodeck where I could get Einstein and Milner to work with me on whatever cool bio/compu idea I have, wouldn't that be neat? OK, well, I think it would be neat, and that as a whole we might end up with a more productive humanity (for some value of productive). I'm not guaranteeing it will be a more educated one, however.

Such extremes are in and of themselves irrelevant to LtU but maybe thinking about those rough directions for the future would result in something new and useful?

Tools are an aid, not a substitute for education

But! There should always be a but. The question is then surely how can we get people to suck less, and one possibility is to have the tools be something to lean on. I completely agree that can be fraught with ultimate peril, but I don't want that to be a reason we avoid trying to develop ever more helpful systems.

I am by no means claiming that we shouldn't try to develop "ever more helpful systems". CAD and other analysis tools have been enormously helpful in the "other" engineering disciplines. It's more a question of what kind of tools can feasibly be developed. My assertion is that (based on my experience in those other fields) trying to hide the 'hard' stuff from the user will not be successful unless the user understands what they're doing. In other words, I don't think it's possible to develop a tool which will successfully permit a user to be ignorant. Which doesn't mean that we shouldn't develop tools. We just shouldn't expect them to be particularly useful to users who refuse to learn the theory behind the tools.

...if we had a holodeck where I could get Einstein and Milner to work with me on whatever cool bio/compu idea I have, wouldn't that be neat? OK, well, I think it would be neat, and that as a whole we might end up with a more productive humanity

Would we really? Without appropriate background, the user of your holodeck wouldn't be able to understand what Einstein and Milner were saying to them when they explained why some particular idea wouldn't work. Now, I suppose that the Einstein and Milner simulacra could attempt to break things down into layman's terms, but then you tend to lose the subtleties of what's going on (which is why we have technical terms in the first place), so it's not obvious (to me at least) that the explanation would be all that helpful in anything but the trial-and-error sense I mentioned in one of my other posts ("I want to do X!", "That won't work.", "Well then, I want to do Y!", "That won't work either", "Uh... I don't know what I want to do", "Yes, that much was obvious" - this is of course a fundamental problem in any engineering project being carried out for a lay-customer as well, but customers [at least those that eventually get the product they need] are at least open to education). Alternatively, the Einstein and Milner simulacra might attempt to give the user sufficient background to understand their answers. But that constitutes "education", and I was under the impression that the holodeck tool was supposed to cater to willfully ignorant users.

Yes, that much was obvious

I wouldn't be surprised if you were 100% correct in the end. :-)

Functional languages aren't hard; laziness is hard

I think it's the (to me) inexplicable popularity of lazy functional languages in the FPL community that makes them seem hard. Strict functional languages aren't really hard at all, and the barriers to their wider adoption are IMHO primarily matters of syntax, which could be easily overcome.

John (he fears the monad) Cowan

Re: strict, lazy, monads

I am reading into your note that you associate monads with lazy functional languages, and that strict ones don't need them. I don't think that's the case so I'm guessing that I'm misunderstanding - pretty please elaborate on your thoughts along these lines?

I think you could find use fo

I think you could find use for monads in every language where you can implement them, but you don't need them in a strict language the way you do in a non-strict language. Haskell has to have monads to make sure its IO happens in the right order. In a strict language the order of evaluation is already defined, so you don't need monads for that.
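
As a sketch of that sequencing point: in Haskell, do-notation is sugar for >>= ("bind"), and it is the data dependency that bind introduces which pins down the order of the IO actions even under lazy evaluation:

```haskell
-- An interactive program written with do-notation...
greet :: IO ()
greet = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name)

-- ...and the same program desugared: each later action is built
-- from the result of the earlier one, which forces the ordering.
greet' :: IO ()
greet' =
  putStrLn "What is your name?" >>= \_ ->
  getLine                       >>= \name ->
  putStrLn ("Hello, " ++ name)

-- The desugaring works the same way in any monad; in Maybe, say:
pair :: Maybe (Int, Int)
pair = Just 1 >>= \x -> Just 2 >>= \y -> Just (x, y)
```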

The monads are there implicitly

Monads were used to describe the semantics of programming languages because they simplified the description of effects. Haskell makes the monads explicit in the IO interface, but you could describe ML or Clean (and probably C and Java) in terms of monads as well.

You could just as easily...

say that the Turing machine instructions are there implicitly. It's true, in a sense, but utterly useless.

I would venture to say that essentially no one actually thinks of ML or Clean in terms of monads, and I'm absolutely positive that no one thinks of C or Java in those terms. In a discussion about language usability, that kind of matters.

Circularity

Monads were used to describe the semantics of programming languages because they simplified the description of effects.

...in a pure, non-strict metalanguage, that is.

I.e. the reason Haskell relies so heavily on monads is the same reason that monads are useful for semantic descriptions, i.e. both Haskell and typical semantic metalanguages are pure & non-strict.

I thought there were no side-

I thought there were no side-effects in Clean to describe with monads. Rather an IO-monad could be built/described by using uniqueness/linearity (as opposed to the other way around).
(Though, maybe one could use some monad to describe an intermediate language in compilation of Clean, perhaps such things as caching of lazy thunks ..)

"needing" monads

Your implication is that the IO monad in Haskell is all about sequencing, and that the distinction therefore comes down to strict/non-strict. I don't think that's quite right. I think purity has as much to do with it. Let's take a look at ML, for instance.

Suppose we wanted to design an ML with purely functional IO. Let's try to adopt the Haskell approach and see what happens...

Instead of "functions" that perform IO in an impure fashion, we'll introduce a set of pure constructor functions that build IO actions, put_string, read_string, etc... So far, so good...

But now we need a way to actually perform these actions, so let's introduce a "magic" value called main, of type "io list", representing a list of IO actions to perform. The program (purely) computes this list of actions, and the language runtime simply performs each action.

Sounds OK, but now we have two pretty serious problems. First, the program has to deliver the entire list of actions "all at once" to the runtime. This is clearly impractical for any interactive program. Laziness is one answer here, but there are others. With a little effort, we could model main as a stream instead. It's not completely trivial, but let's just assume we can make that work.

The second problem is more severe: all of the IO actions have to be mutually independent! There's no way that we can deliver a value from one action to the rest of the program. Even though the runtime guarantees to execute the actions in order (effectively threading "the state of the world" through the program in a linear way), our program can't exploit this fact. So we still can't really build an interactive program!
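
The dead end can be made concrete. Sketching the imagined design in Haskell syntax (the Action type and its constructors are hypothetical, standing in for the imagined ML's):

```haskell
-- A purely functional "program" under the list-of-actions design:
data Action = PutString String | ReadString

main' :: [Action]
main' =
  [ PutString "What is your name?"
  , ReadString
  , PutString "Hello, ???"  -- stuck: the whole list is computed
  ]                         -- before any action runs, so there is no
                            -- way to mention what ReadString produced
```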

Now, if we want to retain purity, we have a couple of options... We could just bite the bullet at this point and introduce an IO monad to abstract and enforce this implicit state-passing, giving the program access to the results of performing IO actions.

But, if you're still not convinced, let's consider the other option. We can forget about the IO actions and all that and extend all the old constructor functions to accept and return a "world state" value (thereby converting them into functions that actually perform the IO action), but now we need to extend our type system with linear types to prevent programmers from (inadvertently or maliciously) reusing world states. So we're already looking at a lot of work... Plus, our programmer faces an exceedingly painful discipline of threading this state through the entire program. What's the programmer to do? "Gee," our thoughtful programmer thinks, "all this single-threaded world-state passing looks suspiciously monadic..." and suddenly we're right back where we started... We'll introduce an IO monad to pass the hidden argument to all these IO functions. The monad does the hard work of meeting the linearity discipline, our type checker is happy, and finally we can do some real purely functional IO without tying ourselves in knots.
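
That world-passing construction can be sketched as a toy model. This is purely illustrative - real Haskell IO is not implemented this way, and the `World` type here is a made-up stand-in (pending input plus output so far) so the whole thing stays pure and inspectable:

```haskell
-- A toy model of IO as a world-passing function.
data World = World { input :: [String], output :: [String] }

newtype IO' a = IO' { runIO' :: World -> (a, World) }

instance Functor IO' where
  fmap f (IO' g) = IO' (\w -> let (a, w') = g w in (f a, w'))

instance Applicative IO' where
  pure a = IO' (\w -> (a, w))
  IO' f <*> IO' g =
    IO' (\w -> let (h, w')  = f w
                   (a, w'') = g w'
               in (h a, w''))

instance Monad IO' where
  IO' g >>= k = IO' (\w -> let (a, w') = g w in runIO' (k a) w')

putLine :: String -> IO' ()
putLine s = IO' (\w -> ((), w { output = output w ++ [s] }))

getLine' :: IO' String
getLine' = IO' (\w -> case input w of
                        (l:ls) -> (l, w { input = ls })
                        []     -> ("", w))

-- Bind lets later actions use earlier results - exactly the ability
-- the plain "list of actions" design lacks:
echo :: IO' ()
echo = getLine' >>= \l -> putLine ("you said: " ++ l)
```

The monad threads the `World` through single-file on the programmer's behalf, which is the "hard work of meeting the linearity discipline" mentioned above.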

So, in short, I think purely functional IO, strict or not, really wants a monad. The historical fact is that Haskell's laziness pushed sequencing to the foreground, and maybe that helped them discover the usefulness of an IO monad (I'm not sure about that, just a guess...), but at the end of the day, sequencing is only part of the story.

(I hope this post makes sense and doesn't contain any horrible mistakes. I don't usually go this far out on the pedagogical limb, and I feel like I'm deeply underqualified. I really hope that our more clever friends will point out my errors before I confuse anybody!)

purity and monads

Actually, I don't consider monadic IO to be pure, so that's why I didn't bring it up. Yes, the threading of world state and linear typing gives you pure IO, but then you give it up again by hiding it in a monad.

A monad is a way to implement side effects in a pure language. Just because they have a pure implementation doesn't mean their usage is pure, just like an implementation of Pascal in Clean wouldn't be pure, and just like the fact that pure languages can be implemented on our impure machines doesn't make the machines pure.

It's the interface of the abstractions that counts, not the implementations.

Monads do imply an effect system that contains and limits the side effects the monad adds, though, and that's really good. But I think that's more of a prerequisite than a consequence. I might be wrong on that one, though.

interesting

This is an interesting comment, because I've always thought of monadic IO in rather the other way... In fact, I might've said, "Just because they have an impure implementation doesn't mean their usage is impure."

I think I understand what you mean, though, and I think both points of view have some validity even though they're in one sense opposite, like a Necker cube.

As an aside, it's also interesting that you distinguish between explicit state passing and using a monad. Suppose we had an explicit state passing system with enforced linearity. Now suppose we create an IO monad, not built into the language, and gave it a function runIO with type "IO a -> World -> (a, World)", where of course World is the linear state argument. So now we have a language where we can intermix styles freely, as long as we have access to the suitable value of World. Do you call this pure or impure? If I understand you correctly, your view is that using the monad is akin to embedding a miniature interpreter for an imperative language. So some code might be pure, and some impure (even though both do IO). But the question remains: is the base language pure? Why should it matter whether the IO monad is primitive? Maybe we're just nearing a fuzzy boundary here. Not that this is bad: interesting things sometimes happen in fuzzy boundaries.

In any case, we seem to agree that if you want to preserve a purely functional core language, monadic IO is probably the most elegant way we currently have to do that... Although perhaps you don't think that at all, and I'm just being presumptuous.

Yes I can see that view to. I

Yes, I can see that view too. If you implement a function in Haskell using a state monad, the function will still be pure on the outside, even though it's impure on the inside. It's really a lot of intertwined layers of purity and impurity.
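A trivial analogue of "pure on the outside, impure on the inside" can be sketched even in Python (the function name is invented for illustration):

```python
def running_totals(xs):
    # Internally this mutates local state (total, acc) on every
    # iteration, so the inside is imperative...
    total = 0
    acc = []
    for x in xs:
        total += x
        acc.append(total)
    # ...but no caller can observe that mutation: the same input always
    # yields the same output, so the function is pure on the outside.
    return acc

print(running_totals([1, 2, 3]))  # [1, 3, 6]
```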

I say the base language is pure, but that it doesn't really matter.

Yes, I could agree to that. (The first statement, not you being presumptuous.)

Generics are fundamentally hard.

Even if you sweetened the terrible C++ templating syntax so it tasted like honey fudge.....

...getting the semantics of generic types correct remains irreducibly hard and is actually deep in the realm of some pretty gnarly mathematics.

And when you mix multiple inheritance, and exceptions and threads and generics.....

....it is very very very hard.

Yes, it is well within the realms of Joe Coder to write code that does all this.

But it is way out of his realm to do it correctly in all the noxious little corner cases.

So it's all a case of "Whee! I'm coding generics / threads / exceptions / multiple inheritance / meta-programs!!!" and "Woe! There is an infinitely deep pool of nasty little sporadic corner case bugs."

And no, I'm no super programmer that can do this. I'm just bright enough to realize the pool of bugs has no bottom when mortals code this stuff.

Not necessarily

I don't think that "generics" are intrinsically hard. In functional languages, for instance, they are completely straightforward most of the time. They seem to be much harder in OO languages, but I cannot help thinking that this tells more about OO (and its commitment to subtyping and complex type recursion) than about generics themselves.
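As a small illustration of plain parametric polymorphism with no variance questions, here is a generic container sketched in Python with the standard typing module (the Stack class itself is invented for illustration):

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A generic stack: the element type T is a pure parameter.

    Because T is never involved in subtyping, there are no co- vs.
    contravariance questions to answer; the definition works
    identically for Stack[int], Stack[str], and so on."""

    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```

It is once generic types start interacting with inheritance ("is a Stack[Cat] a Stack[Animal]?") that the gnarly corner cases from the comment above appear.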

Answer to one question

"So my question is, at what point have you made a language that regular folks simply won't be willing to learn?"

At the very beginning. For any "feature set". Period.

"Regular folks" (i.e. professional "in the trenches" programmers) aren't going to learn anything if it does not clearly and fairly immediately lead to benefit in what they are doing (quite rationally*). On the other hand, when these things become clearly relevant they will be learned no matter how "hard". Now that Java has generics, they will be learned and used.

This means that if you want to appeal to "regular folks", your language should consist of incremental additions to mainstream languages or, better, contain absolutely no non-mainstream/new ideas and simply do things in a cleaner way. The goal is to get the most benefit out of what typical training/education provides; parametric polymorphism is not typically taught, nor does it come up in on-the-job experience for most, ergo it, as a feature of a language, will have to be "learned" and will be seen as "difficult".

* I find/found the whole "languages for smart people" v. "languages for the masses" idea arrogant, elitist, and, worst of all, completely irrelevant. The "masses" are (a) not "stupid" as the names would suggest, (b) are behaving rationally given their knowledge/perspective/value system in not learning many of the "smart people" languages, and finally (c) do find many of the ideas "hard", but, in my opinion, this is mostly due to what they were taught/learned on the job (or rather not taught/learned); certainly something like "recursion", even for the sake of argument considering it inherently more complex than iteration, is still well below the threshold of what would be expected to be known by a professional.

Languages for rednecks

I find/found the whole "languages for smart people" v. "languages for the masses" idea arrogant, elitist, and, worst of all, completely irrelevant.

In that case, I'm guessing you won't like my new Redneck Scheme dialect, in which call-with-current-continuation is renamed y'all-come-back-soon-now-y'hear?...

Aside re: regular folks

What led you to thinking about people being "smart" vs. "stupid"?

As an aside, I see myself more as 'regular folk' since a lot of PLT is, to be honest, over my head. What interests me is the goal of having software systems that are more robust, more fun to develop, more flexible to extend, etc. and I think theory can lead us to tools which make such dreams more realistic (avoid shared state in concurrency, avoid mutable state in general, use types, use GADTs, etc.). Too bad I'm really just watching and waiting on the sidelines.

I will admit to one particular metric by which I evaluate other folks: it is how much they want things (government, languages, automobiles; anything) to be better. When I work on projects which are still using C and mutexes, I think that is very sad. The culture is not amenable to trying significantly 'better' things. So if the Average Joe programmer isn't actively thinking "gee, there has got to be a better way to do this" then I find that to be the missing thing.

Re the "smart"/"stupid" I bel

Re the "smart"/"stupid" I believe the "languages for smart people" ... meme I guess is the best word started here. And has been discussed on ll1 and LtU. As I said, the very name suggests that "the masses" are "stupid". In my opinion, the much more accurate name for "languages for smart people" is the admittedly less pithy (though perhaps that could be fixed) "languages for people who like programming for programming's sake". Explicitly answering the question of "what led me to ...". Presumably, almost by definition, the "languages for smart people" are the ones/contain features that people are not willing to learn. That, and I justed wanted to rail a bit on that topic and will stop here for that particular reason.

Re your second paragraph, and making explicit an aspect of my first reply (ignoring all the other things, such as industry support, that make "subpar" solutions not necessarily subpar): it is, in my opinion, completely rational that many of the "regular folks" are ignorant of, or incompetent with, many "better" ideas. For example, if the people who programmed that project using C and mutexes were only aware of and proficient in those techniques, then they may well have been using the best techniques that they knew of, anyhow. The "Average Joe programmer" can be thinking "gee, there's got to be a better way" and still end up using less than the best techniques. They need some mechanism to judge that the effort they put into learning something will ultimately be paid back. You can't know intrinsically whether something is worth learning until after you learn it. There are, however, cheaper ways people (in general) make such decisions, for example: following the majority, following authority, or taking advice from a respected colleague.

Summing up: in my opinion, approaching the problem from a language perspective, especially one of making "easy" languages, is wrong (at least for PLs targeted at programmers). No basic feature is so intrinsically difficult as to be "too difficult" to learn. All of them tend to be easier than, say, becoming a competent OpenGL programmer (or stick in many other library names), and certainly less difficult than learning programming in the first place. The problem is almost entirely one of motivation. Why should one learn some feature or language or even syntax? Just because there is a small community that says it's the best thing since sliced bread, while meanwhile most everyone else ignores it? Hence the first sentence of my first reply: motivation to learn a language is not a language feature.

Re: sliced bread

Just because there is a small community that says it's the best thing since sliced bread? while meanwhile most everyone else ignores it?

You are quite right. My personal experience is that the "sliced bread" tends to be only that: there's no butter, no knife, no plate. Anything to do with computers is complicated and is most likely not to work unless there has been a large community behind it for several years. I've installed O'Caml 3 times; twice the install was a failure, and the other time there were blatant bugs in the tk/gl libraries. Several languages don't even have a debugger. Etc.

So another aspect of it all is: who has the luxury or time or ability to withstand pain of the "sliced bread" solutions? If you are doing your PhD thesis with Haskell that is possibly quite different than needing to ship a million household Digital Video Recorder systems. (Personally, I think there should be room to move towards 'esoteric' things like Erlang or BitC or whatever.) But that is getting further OT for both this thread and for LtU in general :-}

"Languages for thought leaders"

In my opinion, the much more accurate name for "languages for smart people" is the admittedly less pithy (though perhaps that could be fixed) "languages for people who like programming for programming's sake".

That doesn't capture the distinction in question. The main claim is really that it's possible to get more done with less effort, using more powerful or more appropriate tools, in this case languages. And that claim is clearly true in some cases, although exactly which cases is often disputed.

It's also quite reasonable to characterize someone who exploits a powerful tool to make his job easier as "smart".

This is really orthogonal to the question of the rationality of the choice that programmers make. The "masses" may be perfectly rational in following a particular crowd, but that doesn't preclude some people from being "smart" and making effective use of particularly powerful languages in ways that Joe Average either hasn't thought of, or isn't capable of.

Some languages do seem better-suited to this kind of use, but I doubt there's a way to satisfactorily capture that in any short phrase. Of course, a modern marketing department might come up with something like my subject line.

Elitist recursionist

Why isn't it elitist to state that a professional should know recursion? I've been told that many programmers get by without it.

Getting by is overrated

Many programmers get by without a lot of things such as parser generators, garbage collection, generics, etc. Sometimes recursion is the right tool, and not knowing about it deprives them of that option.

Programmers can get by with less.

In fact, a programmer can get by only knowing global variables, if (no need for a then), and goto (sounds kind of like the original FORTRAN, eh?). Is it then elitist to state that a professional programmer should understand procedural abstraction and looping constructs? Many (early FORTRAN) programmers got by without them just fine. Remember, a programming language doesn't supply these and other constructs because they let you do new things (in the end it's all assembly language anyway, which, in general, doesn't have much more than conditionals and goto as far as control structures go). Programming languages supply these constructs because they let you think in new ways.

There are some cases where it is either very difficult or impossible to come up with a (purely) iterative solution to a problem. (Remember, you can simulate recursion with iteration and a stack, so you don't need recursion.) Even developing an iterative solution that uses a stack is much more difficult than developing the recursive solution. This is similar to how it is possible to never use procedures in your program. You can just use gotos, but procedural abstraction allows you to think about the problem in ways that were impossible before, and hence certain problems will be much easier to solve once you understand the concept of procedural abstraction.
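The recursion-versus-explicit-stack comparison above can be made concrete with a small Python sketch (the tuple-based tree representation is invented for illustration):

```python
# A binary tree as nested tuples: (value, left, right), with None for empty.
tree = (1, (2, None, None), (3, (4, None, None), None))

def tree_sum(node):
    """The recursive solution follows the shape of the data directly."""
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum(left) + tree_sum(right)

def tree_sum_iterative(root):
    """Simulating the recursion with an explicit stack: it works,
    but the bookkeeping obscures the idea."""
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        value, left, right = node
        total += value
        stack.append(left)
        stack.append(right)
    return total

print(tree_sum(tree), tree_sum_iterative(tree))  # 10 10
```

Both functions compute the same sum, but only the second forces the programmer to manage by hand what recursion expresses in terms of the problem itself.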

This argument extends to things like higher order functions, Lisp macros, and the like. No, you don't need higher order functions any more than you need the ability to abstract code into a procedure. However, higher order functions allow you to think about your problem in a different way. In certain cases this allows you to find a solution that would have been much more difficult (if not impossible) to discover without such an abstraction. Why is it elitist to argue that it is better to learn multiple methods of abstraction? In many cases functional abstraction (along with recursion and the like) allows you to think in terms of the problem, rather than in terms of the machine.
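A small illustration of "thinking in terms of the problem" versus "thinking in terms of the machine", sketched in Python (both function names are invented for illustration):

```python
# Problem-level: lambdas and higher-order functions state the intent
# "square the even numbers and total them" almost directly.
def sum_of_even_squares(xs):
    return sum(map(lambda x: x * x, filter(lambda x: x % 2 == 0, xs)))

# Machine-level: the same logic as explicit index bookkeeping
# and manual accumulation.
def sum_of_even_squares_manual(xs):
    total = 0
    i = 0
    while i < len(xs):
        if xs[i] % 2 == 0:
            total = total + xs[i] * xs[i]
        i = i + 1
    return total

print(sum_of_even_squares([1, 2, 3, 4]))  # 20
```

Both versions work, which is exactly why "I can do that in X too" misses the point: the difference is in how easy each version is to state, read, and reason about.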

Something that annoys me to no end is when programmers insist that they don't need this construct or that construct, because they have gotten along fine without it. I've seen it over and over again. "I can do that in X too! See." Then they proceed to show a complex and verbose code listing demonstrating how they don't "need" higher order functions and lambda expressions. That the lambda and higher order functions made the functional solution easier to reason about and develop is ignored, and of course the result is that the person trying to demonstrate the benefits of functional programming is called an elitist or some such.

My intent with that remark (t

My intent with that remark (though I admit it's quite difficult to tell) was more: if recursion were a "critical" aspect of the majority of mainstream languages, then it would be learned and there would be few who would consider it difficult. That said, in this case, most programmers are aware of recursion (God I hope so!), though usually limited only to the stereotyped cases; substituting "parametric polymorphism" aka generics, or "higher order functions", or many other things would also work. Another interesting one is sticking in "object orientation" and comparing the environment today with the one, say, 15 or 20 years ago. The point being that these things are "difficult" more as a function of context, and that these things, once aspects of mainstream knowledge, are/will be at a basic skill level, like knowing how to saw wood in carpentry.

Thanks for the clarification

The point being these things are "difficult" more as a function of context, and that these things, once aspects of mainstream knowledge, are/will be at a basic skill level like knowing how to saw wood in carpentry.

I agree that difficult is often different in disguise. That's why programmers should be exposed to different things during their education, so that they can apply them without difficulty in their professional life.

"That's why programmers should be exposed to different things during their education [...]"

Operative word, unfortunately, is "should".

On Complexity

There was a programming book on ATL that was self-congratulatory about how clever ATL programmers had to be to understand nuances like the difference between multithreaded apartments and single-threaded apartments, and how programs crash and burn if you forget to implement IMarshallable because STA objects can't be invoked directly by another thread. The passage remarked, "That's why we are paid so much money".

Where is ATL today in day-to-day business apps? I'd guess very little, lost to C# and Java.

I think there is an inherent danger in complexity: it just begs to be replaced with a simpler construct, one in which the computer does the hard work. Garbage collection is an example of this. Spending time learning these complex constructs when they are going to be obsolete by the time one masters them is an exercise in futility.

Re: futile calisthenics

Spending time learning these complex constructs when they are going to be obsolete by the time one masters it is an exercise in futility.

I concur. Instead, I'd love to see people spending that time on making the things which will obsolete the hurdles we whip ourselves with daily, to piss metaphors into the wind. It seems like all the wacky possibilities of generics should be amenable to some simplification from the perspective of the developer, if not from the perspective of the underlying code that gets executed.

(But admittedly I am lame and probably wouldn't spend the effort to grok it all and make some demo simplification tool. Perhaps once Mr. Eckel is done, I'll read what he's boiled down and then see if there's something even more pithy and concise that could be written.)