## Why only 'minimal' languages

Why is there still a trend to create languages where most functionality is implemented in standard libraries and not in the language itself?

Quite often, lots of time and effort are invested to make a language as extensible as possible and to create every feature (even standard datatypes like arrays, lists etc.) in the library rather than as part of the language itself. I think that's really antiquated thinking from the early years of CS, when the usual usage patterns of programming languages were not so well known and an extensible language seemed much more powerful. But this has changed a lot: today most usage patterns are commonly known, and while there are lots of them, the total number of those patterns seems quite manageable.

Often it's unnecessarily difficult to use those 'library implemented' features. Think of using lists in a language without native list support (like Java, C++ etc.) and compare that to a language with native list support. It's so much easier and much more readable in the latter.
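The difference is easy to see in a language with native lists. A small Haskell sketch (the function names are mine, purely for illustration) contrasts the literal-and-library style that native support enables with the element-by-element construction that library-only lists force on you:

```haskell
-- With native list support: a literal and a one-line transformation.
doubleAll :: [Int] -> [Int]
doubleAll xs = map (* 2) xs

-- Without literal syntax you would build the same list element by
-- element, roughly what "new ArrayList(...)" style code amounts to:
sample :: [Int]
sample = 1 : (2 : (3 : []))   -- the spelled-out form of [1, 2, 3]
```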

But why stop with those 'low-level' features? Why not try to build a language with ALL commonly used features directly in the language: from strings to maps to general graphs; from loops to threading to database access; from visitor to observer to MVC. A language without much of a 'standard lib', because most of it is integrated in the language itself. Instead of inventing systems to create DSLs, why not simply create and integrate all those 'DSLs' in a single language?

Sure, such a language would be less 'cute' than a minimal one. But I suspect we could gain a lot of productivity, safety and performance.

### Axiomatic Provability and Replaceable Subcomponents

The smaller the initial language, the less code for its implementation and the easier it is to 'prove' that it is correct. This takes us closer to the mathematical idea of axiomatic reasoning, and also makes a real-world implementation more feasible.

You're writing a language that is hopefully 'better' than what is already available, so why not show that you can write all of these common patterns in the new language itself? That serves a double purpose: it shows that standard patterns can be handled in the new language, and thus that the language should be able to handle any other non-standard pattern that you can throw at it. Implementing such features in the development language gives greater efficiency, but shows little of what you can do with your new toy; putting everything in libraries does.

Putting other things in libraries also allows them to be replaced easily (hopefully), or at least overridden during the development process if one finds something more suited to one's needs. A large static language is just that, and although such a language might suit the needs of its initial developer, you can be sure that others won't find it so suitable.

This question is a bit like how old mainframe operating systems were designed vs. Unix. The big-iron systems were huge, integrated and as monolithic as they came, including kernel, drivers, shells and applications (from what I know; I wasn't there). Unix was small and disconnected, pieces could be replaced, and the flexibility of the system won out over the staticness of big iron.

But where to draw the line between language and library is never really known; libraries show the power and decoupling of the language, while putting features in makes it static and at risk of being called bloated. There's always an opinion, and usually the most correct one is somewhere on both sides of the line.

--
kruhft

### I wouldn't say this is still a trend

To me, the core language with little in it is an old concept, certainly not one that's been the driving force behind language design over the last 20 years. Some counterexamples I can think of:

* Q (the latest revision of K) has database access built-in.
* Iverson's J has vectors and matrices and complex numbers built-in.
* Felix has regular expressions built-in.
* Perl has regular expressions and file I/O built-in (there's an operator for "read line from file").
* REBOL has support for dates/times, html/xml tags, and filenames built-in (filenames start with the percent symbol).

### But why stop at those

But why stop at those features? And why not combine them all in one language?

Why not have database access, vectors, matrices and complex numbers, regular expressions, file I/O, XML, dates, times and lots more (all commonly used ADTs, support for all common usage patterns, GUI programming, server programming, scanner/parser generators, and so on) built into the language instead of implementing them via libs or external code generators?

### Why?

That seems to be the approach taken by Perl. ;-)

What does it gain you? Easy syntax? Code-generators can get that for you. Interoperability with other language features? Libraries can do that, and then you don't need to carefully consider the interaction between every pair of features.

I'm trying to imagine what a language like this would look like, and what I come up with isn't pretty. I like being able to load a database library and think in terms of "okay, to query, I call this function" or load a GUI and think "to display a button, create a Button object". It means I don't have to memorize new syntax for every single new task. If they were language features, I'd have to memorize a whole lot more irrelevant details to get my job done.

### The problem of Perl is it's

The problem of Perl is its horrible syntax, not too many features. Perl doesn't offer more than, for example, Ruby, but the latter has a much more readable syntax.

If you have to use something, you have to remember something. That's always true, whether you use a lib or a new syntax. But a syntax can be made more expressive (if the language designer has done his job well).

Take the GUI example: if the GUI is implemented via a lib, you need to know lots about the internal workings of the lib. But if you put it into the syntax of the language, the compiler can infer lots of things you don't need to specify explicitly anymore. In the case of a GUI, the compiler creates calls to some runtime library, which can then be replaced to fit different underlying systems. But for the user of the language this would be totally transparent.

Another big problem with libs is that they are easily replaceable. Many see that as an advantage, but is it really? Why use different GUI libs or even ADT libs? It only reduces maintainability and readability of the code, because everybody can use another lib, even for standard stuff.

### Syntactic Sugar

The problem of Perl is its horrible syntax, not too many features.

This is exactly my thinking and why I raised the idea of our testing alternate surface structures with identical semantics.

If replacing Perl's syntax with something better could reduce errors and coding time, you would have a solid case for the value of putting more design effort into this sweet (pun intended) topic.

### Most compilers are

Most compilers are implemented as chains of tree rewriters which transform the original source in multiple steps into executable (or VM) code. In a 'fat' language, most of the 'fat' language features would be processed in the first stages of the compilation process (similar to syntactic sugar). If the later stages of the compiler use a solid model, you can prove lots of interesting stuff at those later levels without increasing the complexity compared to a simpler language.

Sure, creating useful libraries and compilers in a language is a common way to check how useful the language will be in the future. But that only shows how useful the language is in the domain of library programming. And it's not necessarily true that a language which is good for programming libs and ADTs is also good for common application programming tasks; yet the latter is a much more common task in every (productive) language, and the real goal someone should tackle when designing a new programming language.

Today we know very much about the usage patterns of programming languages. There are simply no big surprises anymore. In the earlier days that was different, and because of this, 'minimal languages' which implement as much as possible in extensible libraries were a good idea. But is this still true? I doubt it.

And I don't think that the Unix/mainframe comparison is really applicable here. Even a 'fat' language would be able to create new things by combining existing features. I simply suspect that those 'new things' are not very common in the life of most programmers, and a language should support the common things by making them as usable and as safe as possible.

Think of the huge number of mistakes and lost opportunities for clever automatic optimisation caused by putting SQL statements into strings (as lots of languages do). If the language had a similar feature directly integrated, the compiler could do lots of checking and optimisation, and the program would probably be much more readable.
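A minimal sketch of the idea, approximated in Haskell with an ordinary datatype rather than a real language feature; the column and table names are made up for illustration. Because the query is data rather than a string, a misspelled column name simply fails to compile:

```haskell
import Data.List (intercalate)

-- Hypothetical schema: columns are constructors, not substrings.
data Column = Name | Age deriving (Show)

-- A query is a value: the selected columns and a table name.
data Query = Select [Column] String

-- Rendering to SQL text happens in one checked place.
render :: Query -> String
render (Select cols table) =
  "SELECT " ++ intercalate ", " (map show cols) ++ " FROM " ++ table
```

Writing `Select [Nmae] "users"` is rejected by the compiler, whereas the equivalent typo inside a string literal would only fail at runtime.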

Or look at the success of those 'new' 'dynamic' languages like Ruby and Python. I suspect that the primary reason why many people like them is not the dynamism, but simply the fact that they have more well-designed built-in features like lists, maps etc. compared to a language like Java or C++. A huge percentage of all those "look how easy it is in Ruby/Python" examples are simply demonstrations of the expressiveness of those built-ins.

I think 'bloat' is a fear lots of language designers have. But why is 'bloat' in a language so problematic and 'bloat' in the standard libs not? Every programmer needs a certain amount of 'bloat' if he doesn't want to reinvent the wheel over and over again. So it's simply unavoidable; why not try to make it as usable as possible?

### Bloat in the standard libs

Bloat in the standard libs will at most swallow up namespace - and with a reasonable system to start off with, that's a fairly minor problem.

Language bloat starts to swallow up possible design space - existing features rule out new ones, whether by breaking invariants the new feature requires kept or just by hogging all the sensible bracket characters.

### So you think it's

So you think it's impossible?

Or is it simply more difficult to do than designing a minimal language?

Many features are easily reusable: a standard looping construct is reusable for all kinds of iterations, and built-in types can often be discriminated via access patterns or hints (e.g. whether a compiler uses a tree-based map or a hash-based map).
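Haskell's Foldable class is one concrete version of this kind of reuse: a single "loop vocabulary" (sum, length, foldr, ...) works over any container that implements it. A small sketch with a user-defined tree (the Tree type is mine):

```haskell
-- A user-defined container...
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- ...made loopable by giving it a right fold.
instance Foldable Tree where
  foldr _ z Leaf         = z
  foldr f z (Node l x r) = foldr f (f x (foldr f z r)) l

-- One "iteration construct" now covers lists, Maybe, trees, ...
total :: (Foldable t, Num a) => t a -> a
total = sum
```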

Sure, it's more difficult. But who said that language design has to be simple?

### It's exponentially more

It's exponentially more difficult once the features have any kind of side-effects that can interact. And if you've got a pure language to start with you may as well just go the libraries+sugar route, or look at making the sugar definable.

But won't this mean that all the effort in the area of DSLs is totally wasted? And what about Lisp, which seems to be able to integrate lots of different features and also be extensible? Why shouldn't a carefully designed language be able to accomplish the same, maybe with a bit more 'mainstream-friendly' syntax?

A 'fat' language would have lots of features, but not so many that it would be impossible to create. Of course it needs a totally different approach to design than the more common minimal languages.

### Sufficiently good purist

Sufficiently good purist languages make great substrates for DSLs, no? I'm not arguing against purist DSLs, either. Lisp effectively goes the library route; it's not easy to tell a built-in special form from a macro and some externally implemented code.

### Lisp shows that it's

Lisp shows that it's possible to create a language with the potential to be really 'fat'. Simply imagine a well-designed, huge, macro-based standard lib.

The problem with Lisp is simply that everybody can create their own extensions, and then there is no common ground anymore. That's not a problem for individuals or small teams, but for an 'industrial strength' language you need something more fixed.

### Standard Libraries

The problem with Lisp is simply that everybody can create their own extensions, and then there is no common ground anymore. That's not a problem for individuals or small teams, but for an 'industrial strength' language you need something more fixed.

I think you have some misconceptions about the nature of Lisp. Lisp allows us to put our syntactic extensions in an ordinary code library, so your statement simply amounts to, "An 'industrial strength' language requires a set of standard libraries."

I'm surprised that you mention Lisp. I would have thought that Lisp macros would give you exactly what you are asking for. (If you don't mind parentheses!) They allow us to write libraries containing both syntax and semantics.

Indeed, Paul Graham argues that macros give Lisp much of its power. He writes that the collection of functions and macros used to solve a problem form, in effect, a Domain Specific Language tailored to that particular problem domain. He has referred to this as "Language Oriented Programming".

When you call Lisp "fat", I think you are referring to something like Common Lisp, but this is an anomaly. Common Lisp could, in principle, easily shed a few pounds by shifting some code from the standard prelude into libraries. Check out Scheme to see what is possible in a lightweight dialect.

I should also point out that not all languages are as syntactically rigid as the C-style family. For instance, in Haskell, we can create new infix operators and specify their associativity and precedence. Haskell's lazy, pure-functional semantics also eliminate much of the need for new syntax; for instance, consider the following, which would be impossible in a language with eager evaluation:

    if_then_else True  consequent alternative = consequent
    if_then_else False consequent alternative = alternative
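Restating that definition in a self-contained form, laziness is what makes it usable: the branch that is not selected is never evaluated, so even an undefined alternative is harmless:

```haskell
if_then_else :: Bool -> a -> a -> a
if_then_else True  consequent _           = consequent
if_then_else False _          alternative = alternative

-- The untaken branch is never forced, so 'undefined' causes no error.
safe :: Int
safe = if_then_else True 42 undefined
```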

### Its true, that Lisp can do

It's true that Lisp can do all of this, in principle. In practice it's different, simply because there is no such thing as a big comprehensive library. Creating something like that in Lisp would be quite similar to creating a new language itself.

And the problem with an easily extensible language remains: if everybody can extend it, some will, and as a result you have more complexity than in a fat language where all features are carefully designed as a whole from the beginning.

And it's really important to have 'common ground'. The more, the better. Each user-definable thing has to be understood if you have to maintain code someone else wrote (or even your own code after some time). Also, with 'common ground', code reuse is simpler, because parts will simply fit better if they are created on the same foundation. The more you 'fix' a language and its frequently used features, the more of that 'common ground' you will get.

### The IEUC Approach

At the IEUC, we are thinking in terms of a set of standard dialects to capture the preferred nomenclature of key discourse communities and then map those to a set of standard computational models along the lines presented by Van Roy and Haridi.

When a new dialect is created, you could use a formal Description Logic to classify it and automatically handle most of the implementation if its semantics are subsumed by an existing implementation. (I.e., we can decompose a dialect into a choice of computational model, which in most cases (outside the realm of programming language design itself) would already exist as part of the standard; a set of support functions, which could be written in any previously defined dialect; and a parsing expression grammar to supply the novel syntax.)

The problem with most languages that take the library approach is that there is no structure to the set of libraries that would help an End User leverage prior knowledge to reason about them, so each library is more akin to learning a new DSL from scratch rather than reasoning by analogy (e.g. this DSL is just like the 2-D subset of the matrix library without statistical operations, but in this domain we call matrices 'zoidbergs' and add a new 'bender' function to automatically generate hyperlinks). It would be a higher-level approach than just saying DSL 2 imports DSL 1 as part of its implementation, or that it is just a big ball of macros expanding to DSL 1 source.

### I should also point out

I should also point out that not all languages are as syntactically rigid as the C-style family. For instance, in Haskell, we can create new infix operators and specify their associativity and precedence. Haskell's lazy, pure-functional semantics also eliminate much of the need for new syntax; for instance, consider the following, which would be impossible in a language with eager evaluation:

    if_then_else True  consequent alternative = consequent
    if_then_else False consequent alternative = alternative


*ahem* Strict evaluation doesn't preclude manual laziness; see: closures.

### True

That is true; however, his point still stands. In order to delay binding using closures, you have to create the closure at the point of calling the function (assuming you don't have something like macros, which could be considered a form of laziness, since they don't evaluate their arguments). You couldn't just do something like the following (Scheme code; again, if-then-else is a procedure, not a macro, because macros introduce a type of laziness outside of what closures offer), because Scheme will evaluate the arguments before passing them to the if-then-else expression:

    (if-then-else condition
                  (do-consequent)
                  (do-alternative))


### That is true, however his

That is true; however, his point still stands. In order to delay binding using closures, you have to create the closure at the point of calling the function

That would be the difference between lazy and eager, yes. In any event, I wasn't trying to refute his point, just saying that you can do it in an eager language, too.

because macros introduce a type of lazyness outside of what closures offer

Macros offer a simplified syntax for manipulating code; nothing more, nothing less. Manipulation (generally) is outside the scope of closures, although such cases are generally rather rare and exotic.

### I think you've missed the point.

Let's step back for a moment and reread Peter's comment:

Haskell's lazy, pure-functional semantics also eliminate much of the need for new syntax; for instance, consider the following, which would be impossible in a language with eager evaluation:

to which you replied:

Strict evaluation doesn't preclude manual laziness; see: closures.

This is true, but misses the point, which is eliminating much of the need for new syntax. Closures by themselves don't give us the same flexibility that lazy evaluation in a language like Haskell gives us, because eager languages are eager. To gain the same ability for syntactic extensions that we automatically get with a lazy language like Haskell, we need some way to prevent evaluation of arguments; but closures don't give us exactly this, because they force us to wrap the arguments in closures before we pass them, which gives a distinctly different syntax than language primitives.

Macros offer a simplified syntax for manipulating code; nothing more, nothing less.

What is a macro (in a language like Lisp)? The only difference between (simple, non-hygienic) macros and procedures in Lisp-like languages is that a macro doesn't evaluate its arguments, but rather passes the unevaluated s-expressions to the macro so they can be sliced, diced, chopped up, manipulated, and transformed into something else. This is, in a sense, a form (or type) of lazy evaluation. Okay, I lied; macros aren't really a type of lazy evaluation. But it's that property (arguments are not eagerly evaluated), shared between lazy evaluation in Haskell and macros, that eliminates much of the need for new syntax in Haskell.

### This is true, but misses

This is true, but misses the point, which is eliminating much of the need for new syntax. Closures by themselves don't give us the same flexibility that lazy evaluation in a language like Haskell gives us, because eager languages are eager. To gain the same ability for syntactic extensions that we automatically get with a lazy language like Haskell, we need some way to prevent evaluation of arguments; but closures don't give us exactly this, because they force us to wrap the arguments in closures before we pass them, which gives a distinctly different syntax than language primitives.

Oh, oops. Sorry for the mix up, I must've skipped past the "new syntax" part :/

What is a macro (in a language like Lisp)? The only difference between (simple, non-hygienic) macros and procedures in Lisp-like languages is that a macro doesn't evaluate its arguments, but rather passes the unevaluated s-expressions to the macro so they can be sliced, diced, chopped up, manipulated, and transformed into something else. This is, in a sense, a form (or type) of lazy evaluation. Okay, I lied; macros aren't really a type of lazy evaluation. But it's that property (arguments are not eagerly evaluated), shared between lazy evaluation in Haskell and macros, that eliminates much of the need for new syntax in Haskell.

I know. Manipulating code implies lazy evaluation--you can't manipulate code that's already evaluated ;-)

### Closures by themselves don't

Closures by themselves don't give us the same flexibility that lazy evaluation in a language like Haskell gives us, because eager languages are eager. To gain the same ability for syntactic extensions that we automatically get with a lazy language like Haskell, we need some way to prevent evaluation of arguments; but closures don't give us exactly this, because they force us to wrap the arguments in closures before we pass them, which gives a distinctly different syntax than language primitives.

No, that's just not true. If your strict language doesn't have any constructs with implicit laziness, then the use of "blocks" or anonymous procedures to inject laziness simply does not look distinctly different. To make it practical, you just need a sufficiently light syntax (even lighter than in Haskell) for anonymous procedures. AFAIK, in Smalltalk (I've never programmed in Smalltalk), a strict language, conditionals are implemented as procedures, and the use of conditionals and other control constructs looks exactly like procedure calls.

### compile time vs runtime

Vesa Karvonen: AFAIK, in Smalltalk (I've never programmed in Smalltalk), a strict language, conditionals are implemented as procedures and use of conditionals and other control constructs looks exactly like procedure calls.

Yes, Smalltalk conditional syntax is the same as the syntax for sending a message to a block, where the block executes or not depending on the test. Typically a Smalltalk compiler treats such block messages as special, the same way Lisp uses special forms to express conditionals for code that might execute or not; the fact that code looks like a Smalltalk message or a Lisp function call doesn't require that this occur at runtime.

It's best to think of these messages and functions as being processed at compile time. Syntax doesn't usually express when syntax gets processed. The same thing also occurs in older languages like C.

For example, in C the syntax for 'if' and 'while' looks like a function call taking one argument. But it's understood that a compiler emits inline code at compile time for these bits of syntax; you can't define a function named if() or while() and get it to replace a standard runtime dispatch to such functions, because they aren't really functions.

Similarly, there are some messages sent to blocks in Smalltalk you can't override in a subclass, because the compiler intends to statically inline the standard implementation. (Well, technically you could override such block methods and a compiler might emit a runtime message dispatch; but this would evaluate args before dispatch, defeating lazy evaluation of code as in the base version.)

Most Smalltalk semantics are expressed as messages sent to some object. An implementation could -- but most don't -- express method definition as syntax using a message to "the compiler" saying "here's some code to compile; please associate it with this class and this method name". (I did this once in a Smalltalk compiler; it works fine, of course.)

Some languages use different syntax for compile time (declarations and definitions) and runtime (executable code). But in principle the same syntax can be used for both all the time, as long as it's understood which parts occur when. There are pros and cons to each. Special compile-time syntax increases the apprehension that results are magic effected by the language as a system. But you could just see compile-time syntax as messages to the compiler and runtime.

### An expressive type system + syntactic sugar

What is needed is pretty simple: an expressive type system, so that all the actual implementation can be done as a library, and some additional syntactic sugar in the compiler to integrate it into the language. But I think there is a limit to the amount of syntax a user can handle.

It's more of a design choice. Is the amount of work required to add syntactic sugar worth it, compared to the time needed by the user to learn it and the number of times the feature will be used? Does it make such a big difference compared to using the feature as a library?

I guess that a good language will start with a lot of things in the standard library and slowly and carefully add syntactic sugar for the most interesting features without breaking compatibility or turning the syntax into something like Perl.

### Thats the standard way to do it...

"What is needed is pretty easy: [...]"

... but if it is really that easy, why hasn't it worked yet?

The time to create the 'sugar' (I won't call it that, because it's not all sugar) would be comparable to writing a solid and usable lib, but it's much more flexible. Just compare something like "new ArrayList(new int[]{1, 2, 3})" with something like "[1, 2, 3]".

Adding features incrementally to a language is inevitable, but I think it necessary to start with an already quite comprehensive feature set to prevent running into extensibility problems. The more you have to start with, the less you have to add later, with the attendant risk of breaking compatibility.

### [1,2,3] vs 1:2:3:[] is

[1,2,3] vs 1:2:3:[] is entirely sugar in Haskell; your point? Lists are pretty fundamental to lazy functional programming, since they encode most loop-like things, yet they're still just a datatype like any other.
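The equivalence is easy to check; the bracket form is notation for exactly the same cons cells:

```haskell
-- The literal form...
sugared :: [Int]
sugared = [1, 2, 3]

-- ...desugars to cons (:) applied three times, ending in the empty list.
desugared :: [Int]
desugared = 1 : 2 : 3 : []
```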

### Philippa's right.

Philippa's right. Also:

Just compare something like "new ArrayList(new int[]{1, 2, 3})" with something like "[1, 2, 3]".

Yes, let's compare them. Are they the same? You can define them to be equal, of course, and then every list will be an array. But then you will not be able to use the latter syntax for, say, linked list implementations of lists. Indeed, in Haskell the latter syntax denotes a linked list, and you would need to apply some function to [1,2,3] in order to convert it to a value of an array-based implementation. So, conversely, the constructor new ArrayList is not superfluous in Java, but extra information which chooses one implementation among many.

What about the int part? Again, we can define things so that brackets always introduce lists with integer elements, but then we need another syntax for lists with character or floating-point elements and so on. Or we could try to infer the types of the elements... and following this path gets you to type inference.

To try to add features incrementally to a language would be inevitable, but I think it necessary to start with an already quite comprehensive feature-set to prevent running into extensibility problems.

Yes, you are quite right, and that is why modern languages come equipped with features like type inference and recursive datatypes. But merely adding syntax does not buy you much; in this case, for example, it only addresses a tiny set of issues, which are not the real issues at all.

The reasons for making languages "minimal" are the same reasons that we make abstract datatype and class implementations minimal. You are reducing a large set of rules which are difficult to keep in your head at one time to a small set of rules which are easy to reason about. Also, by reducing the size of the language, it is easier to replace one implementation of it with another, since you only need to implement a generating subset. If things become more inconvenient by reducing them this way, then there is a deficiency in the abstraction capabilities of the language; anything which can be defined and put in a library should behave just as if it were built into the language.
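Haskell's user-defined operators illustrate that last point: a library definition can read exactly like a built-in. The pipeline operator below is not part of the language (the name |> is borrowed from F#), yet once defined with its own fixity it is indistinguishable from a primitive:

```haskell
-- A user-defined infix operator with its own declared fixity.
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- Reads like built-in syntax: feed a value through functions.
example :: Int
example = [3, 1, 2] |> maximum |> (* 10)
```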

In Java, before generics, it was impossible to define safe collections which could have elements of any type. This was such a deficiency, and generics addressed it. In functional languages, parametric polymorphism is the analogous feature.

### Meta-programming

Or we could try to infer the types of the elements... and following this path gets you to type inference.

And also to a numeric tower, or some other form of subtyping/coercions for numeric types. Or maybe different syntax for literals of different numeric types.

In Java, before generics, it was impossible to define safe collections which could have elements of any type.

It is still impossible. Generics are more like a convenience for cooperating programmers than a protection from malicious or dumb parties.

But more to the point - I also find it useful to have a definition of language minimalistic when it comes to meta-programming of all sorts, including, but not limited to, code generators, validity checkers, compilers, refactoring tools, and (even meta-circular) interpreters.

[on edit: we had a similar discussion here (look for "humane")
I guess it all boils down to - the simpler the PL, the simpler it is to talk about it, including: formal papers, informal chats, other PLs, or the PL itself]

### There is of course a

There is of course a difference between the two. But that's my point: I don't want a language with some fixed 'syntaxified lib', but a language where the often-used things can be used in a compact, readable and safe way.

I've not yet talked about concrete ideas for how to build such a language, but of course it's not a simple task, because you need to integrate all those features into a common framework instead of simply defining some standard functions or classes.

The [1, 2, 3] in the above example would only say 'an unspecified container with entries 1, 2 and 3'. If the language is statically typed, the interpretation would depend on the context: if you assign [1, 2, 3] to an int array, it initialises the int array with those values. If you assign it to a string array, you get an error. If you assign it to an int list, it initialises the list. Etc.
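As an aside, GHC's OverloadedLists extension already gets close to this context-dependent reading: the same bracket literal is interpreted through fromList at whatever container type the context demands. A sketch (Data.Set comes from the containers package that ships with GHC):

```haskell
{-# LANGUAGE OverloadedLists #-}
import qualified Data.Set as Set

-- The same literal, read at two different container types.
asList :: [Int]
asList = [1, 2, 3]

asSet :: Set.Set Int
asSet = [1, 2, 3]   -- desugars to a Set.fromList call
```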

But merely adding syntax does not buy you much; in this case, for example, it only addresses a tiny set of issues, which are not the real issues at all.

That's true in principle, but not in practice, where you have to use those things very frequently. And of course I don't want to stop with the above example; there are lots and lots of those things.

In recent years much effort has gone into designing languages which are as extensible as possible: reflection, macros, DSLs etc. Why not stop wasting all that effort and simply create the language right off the bat, instead of inventing complex extensibility schemes which only work to a certain degree and complicate the language tremendously?

### I've not yet talked about

I've not yet talked about concrete ideas for how to build such a language, but of course it's not a simple task, because you need to integrate all those features into a common framework instead of simply defining some standard functions or classes.

The [1, 2, 3] in the above example would only say 'an unspecified container with entries 1, 2 and 3'. If the language is statically typed, the interpretation would depend on the context: if you assign the [1, 2, 3] to an int-array, it initialises the int-array with those values; if you assign it to a string-array, you get an error; if you assign it to an int-list, it initialises the list; etc.

I assume you mean it could only say that in your hypothetical language, as the meaning in Haskell is clearly not that. The funny thing is, if you had it "pre-folded" by returning something like Sequence a => a, where Sequence has methods cons and nil (and then the instance for [] would have cons as (:) and nil as [], to give the obvious example), then you'd have exactly what you're requesting using only libs and a tiny speck of syntactic sugar again. Your talk of "assigning it to an int-list" is, er, revealing though.

### It's quite possible that the

It's quite possible that the above is achievable in some existing languages. In Lisp it certainly is, by using some macros.

Most things are already implemented in some existing language. But not all those things together. Some languages have built-in complex numbers, others have lists and hashes, others can directly create XML trees, define GUIs or create scanners and parsers. But where is the language that tries to do all those things (and lots more)?

And even better: why not build abstractions based on those things? Have some kind of 'collection language'. Haskell for example has it with its list comprehensions. But why stop there? Why not also a syntax for folds and zips? They could be even easier to use.
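As a point of comparison, Python gave comprehensions dedicated syntax but left folds and zips as plain library functions; a sketch of the contrast:

```python
from functools import reduce

nums = [1, 2, 3, 4]

# fold: an ordinary library function, no dedicated syntax
total = reduce(lambda acc, x: acc + x, nums, 0)

# zip: also just a function
pairs = list(zip(nums, "abcd"))

# comprehension: the one operation that did get its own syntax
squares = [n * n for n in nums]
```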

Most languages do those things. Haskell for example has a special syntax for easy use of monads. And why not, it really makes things easier. So why stop there?

### We do all these things in

We do all these things in Haskell already, we just don't define new concrete syntax[1] to do so - we go straight to an abstract syntax tree. In the case of scanners and parsers, the resulting code is as easy to read as that of any specialised tool I've seen. Lists have sugar, hashes benefit from some additional generalised sugar to pattern-matching. XML's very much a matter of taste, personally I'm happy using the ordinary function syntax for it. GUIs tend to be much the same, assuming you're not going to just let a designer app do the job.

Sometimes the extra sugar's good, but it's best when it's been designed with generality in mind! How do we get this? With a minimalistic core and an insistence on sugar with a high power-to-weight ratio.

You can always ask "why stop there?". "Because we can't do this in a sufficiently useful manner" tends to be a good answer... what you propose is a language full of everybody's favourite kludges of the day, all conflicting, so if it turns out not to do things the right way, you can't do them the right way without considerable effort and pain. We've got plenty of languages like that already...

[1] This may be a slight lie, there're some creative abuses of operator overloading and type classes floating around

### Tuples.

A language should allow tuples as first-class entities in order to be able to offer syntactic sugar like the kind you mention. For example, it would be nice if in Java I could do this:

Object list1 = List({1, 2, 3});


The List class would have a constructor like this:

List(tuple t) {
    for (int i = 0; i < t.length; ++i) {
        add(t[i]); // add each tuple element to the list
    }
}


Nested tuples could be a replacement for S-expressions:

Object myData = Tree({"a", {"b", "c"}});


S-expressions declaration could spawn a "new" paradigm for writing complex trees (for example UI trees or XML trees):

Form loginForm = new Form({
    "My Form",
    {new Row({new Button("Ok", new ActionListener() { void exec() { loginForm.close(); } })})},
    {new Row({new Button("Cancel", new ActionListener() { void exec() { loginForm.cancel(); } })})}
});
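The nested-tuple idea works today in Python, where tuples are already first class; the Form/Row/Button names below are just illustrative data, not a real GUI library:

```python
# a tree encoded as nested tuples, in the spirit of S-expressions
tree = ("a", ("b", "c"))

# a UI description as plain data; a hypothetical interpreter could
# walk this structure and construct the real widgets from it
form = ("Form", "My Form",
        ("Row", ("Button", "Ok")),
        ("Row", ("Button", "Cancel")))

def count_leaves(node):
    """Count the string leaves of a nested-tuple tree."""
    if isinstance(node, str):
        return 1
    return sum(count_leaves(child) for child in node)
```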


### Type inference

As Frank said, in order to get this working nicely, you need at least some type inference. First-class syntactic tuples are less useful than first-class tuple types.

For instance in your List constructor :

List<T>(tuple<T> t) {
    for (int i = 0; i < t.length; ++i) {
        add(t[i]); // same idea, now generic over the element type T
    }
}


tuple<T> being defined as (T*)

### Sequence constructor

I think it is worth noting that the next revision of C++ should include a "sequence constructor" just for this. Sadly I don't think the design committee has released an explanation of how that is supposed to work.

### A language should support first class tuple types.

For example, a function that accepts a pair of ints could be written like:

void someFunction(tuple t);


All languages have the concept of tuple, at least at function/procedure/method level: the parameters of a function are a tuple, usually constructed on the stack.
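Python makes that observation concrete: `*args` collects the positional arguments into a tuple, and `*` unpacks a tuple back into arguments. A minimal sketch:

```python
def some_function(*t):
    # t is an ordinary tuple holding all positional arguments
    return sum(t)

pair = (3, 4)
result = some_function(*pair)  # unpack the tuple into two arguments
```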

### To try to add features

Adding features incrementally to a language is inevitable, but I think it necessary to start with an already quite comprehensive feature set to prevent running into extensibility problems. The more you have to start with, the less you have to add later with the risk of breaking compatibility.

What is necessary to prevent running into extensibility problems (or at least minimizing them) is not a comprehensive "feature-set", but an expressive language. As soon as you start needing to define features of greater expressiveness than your language you massively lose extensibility*. If you have a highly expressive language then in all likelihood (and as evidenced in practice) you will be able to easily express this "comprehensive feature-set" within the language; nice syntax then merely requires a localized syntactic sugar (something like Camlp4); "common ground" simply requires defining a large standard library (as others have implied). The point you have remaining is the compiler being able to make assumptions about language constructs that it would not be able to make about libraries. In my opinion, a much more interesting route to achieve this is to be able to specify properties about your code and perhaps have some further support to guide compilation (particularly optimization). Even today this can be partially done in some "widely-used" (in the LtU sense, i.e. including Haskell but not Epigram) languages.

That seems to cover all of your objections, but may bring up a new one: the highly expressive features may be complicated to understand and unnecessary if we had language support for everything that programmers "need". If you take this view, you are essentially explicitly giving up some extensibility and stating that we can bake in support for the vast majority of the tasks programmers do.

* Haskell suffers from this, but it is mitigated by a (de facto) fairly uniform and structured method of extension.

[On Edit]
A problem along the lines of the last paragraph (not the "footnote") is that the expressive features may not play well together; this is part of why I added the "(or at least minimizing them)" above. This suggests that it would further be prudent to be able to control and compartmentalize the features. Haskell's approach strongly lends itself to this, which is probably a further reason why many people don't find Haskell problematic to use despite its relative lack of expressiveness.

Again, I find research into how best to go about this much more interesting than some particular solution, which in all likelihood would be based on this research anyway.

### TO WHOM is the question directed?

When you're asking an extremely general design question, perhaps the first thing to do is find an appropriate audience. Two questions I would ask are

2. what resources was the designER (perhaps a team) given

so IMHO "why doesn't so-and-so include database access functionality?" is not a very helpful question to ask of EVERYONE.

It would be better to find someone who was tasked with designing a language and a database access scheme, and chose not to include anything in the language to aid the database access.

Has anyone been in a position where they've been required to design a language that integrated

regexes
database
hash tables
HTML/XML

and so on ....

Maybe your beef is more with those who set specifications than those who actually do the designs? (I admit this may be the same people wearing different hats).

### I think that most people who

I think that most people who are interested in language design are reading this blog. And I've read here about so many clever schemes to create minimal languages that I asked myself: doesn't anybody consider the possibility that this is in fact the wrong way if you want to create a really usable language? How many people complain that languages like Haskell or Lisp are so seldom used, while ugly languages like Java or C++ dominate the market?

The most frequent design imperative seems to be: how can we create a language using the smallest number of abstractions? While that's an interesting task (and necessary if you want to explore the usefulness of new abstractions), computer languages are also tools to get work done. But this 'getting the work done' part of a language seems to be considered a mere distraction from the arcane task of creating type systems and new abstractions.

But identifying all commonly required usage patterns and integrating them into a single language could be an interesting and challenging task too. So I'm curious why so few people seem to recognise that.

### Because what Edsger

Because what Edsger Dijkstra said in his Turing Award lecture 33 years ago is still true.

"I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language -our basic tool, mind you!- already escapes our intellectual control."

### maybe the mud ball is simpler

You're raising the same question the thread originator did.

Say you need a language plus a library to solve a problem.

Suppose that instead you included the library in the language, and in the combining you could change both language and library such that the whole is simpler than the parts.

karsten_w assumes that this combining will yield a simplification (I'm not so sanguine: if you contaminate the base language with complications meant to simplify library additions, the base language eventually must become more complex; the term "parasitic complications" comes to mind).

### Preventing those "parasitic

Preventing those "parasitic complications" would of course be a central point in designing such a language. But those complications exist even in normal libraries; people are simply accustomed to ignoring them, because it's 'only the lib' and not the language itself.

Let's look for example at standard ADTs: sets (implemented via trees, arrays, hashmaps), lists (single/double-linked), arrays, maps (misc. implementations), stacks. And then there are slight modifications of those: a map could have an LRU feature or use weak refs as keys, arrays could be sparse, etc. Of course it would be stupid to create new syntax for each of those ADTs.

Instead, use some kind of attributing/hinting system and let the compiler figure out the rest. OK, this would be difficult in dynamically typed languages, but in a statically typed language you can simply extend the declaration. Maybe declare a map like this:

map: Int[lru:100, weak-key:String]

and a list like this:

And if you assign to it, you could use

list = [1, 2, 3]
map = ["a" -> 1, "b" -> 4]

without a need to remember what the real implementation of 'list' and 'map' is.

And if someone uses:

list.count

the compiler recognises that and maybe creates a count-field for the list. And if you use

list[10]

the compiler can warn that this operation is inefficient for a linked list - or even use an array list automatically, unless you specify the wanted implementation explicitly.

Choosing the implementation depending on usage patterns is starting to become more common with dynamic compilation and VM-based execution. Why not extend it a bit? But with everything defined explicitly in a plain lib, the compiler doesn't have those optimization opportunities, because it can't simply 'understand' the code in a lib.
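At library level the hinting idea can at least be sketched in Python; the hint names here are made up, and a real compiler-level version would also watch usage patterns as described above:

```python
from collections import OrderedDict, deque

def make_container(kind, **hints):
    """Pick an implementation from declared hints (hypothetical scheme)."""
    if kind == "map" and "lru" in hints:
        return OrderedDict()   # keeps insertion order, so LRU eviction is easy
    if kind == "list" and hints.get("linked"):
        return deque()         # cheap insertion/removal at both ends
    if kind == "list":
        return []              # default: array-backed list, cheap indexing
    raise ValueError(f"no implementation for {kind!r} with {hints!r}")

xs = make_container("list")
xs.extend([1, 2, 3])
```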

### minimality/simplicity is a principal principle of good design

in almost all design schools (except those that deal with ornamentation, and even there one finds many exhortations to minimalism and admonishments against "overdoing it", i.e., including too much).

Note, for example, that Java advocates are forever apologizing for having to type too much ("the editor takes care of it for you".)

the same with the authors/users/advocates of visual wizard tools.

(a very general answer to a question that I said was too general).

### But simplicity has it's

But simplicity has its limits. Why would everybody use a sedan if minimality were everything? Why isn't everybody driving a minimalistic sports car instead?

My question seemed general, but the topic is general too. The goal of most language designs is minimality. There are a few slight exceptions (for example Ada, or certain 'scripting' languages), but even those don't attempt to create a really comprehensive language, and seem to excuse themselves for every additional language feature.

I think it's some kind of 'dogma' that a language has to be as minimal as possible and implement as much as possible in the libs (Haskell is a perfect example of this: a very minimal core, and lots of stuff in the standard prelude which is part of the language in many other languages, etc.).

I think it's some kind of 'dogma' that a language has to be as minimal as possible and implement as much as possible in the libs (Haskell is a perfect example of this: a very minimal core, and lots of stuff in the standard prelude which is part of the language in many other languages, etc.).

I recently was refreshing myself on Haskell's syntax, and I thought to myself "man, this is complicated". So I find your comment very strange.

For example, patterns, guards, case, and if/then/else are essentially four different ways of writing a conditional! List comprehensions? The "dot dot" notation? 9 different pattern types? String, list, and tuple support? Let vs where? I think I'll stop here.

Even if some of these are defined in the standard prelude, as far as the user is concerned the syntax is built-in.

### Haskell is minimal in the

Haskell is minimal in the sense that it can implement in libraries many things other languages have to put into the syntax.

And I think that the mentioned complexities result from this: to have enough power to implement very basic things in libraries instead of making them part of the language, you need a more complex language.

It's, for example, much easier to build complex numbers right into the language (together with floats, ints, fixed coercion and conversion rules, etc.) than to build a type system which enables you to describe all those coercions and conversions in the language itself.
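Python happens to be an example of the built-in route: complex numbers have literal syntax and fixed coercion rules, so they mix with ints and floats without any type-class machinery:

```python
z = (1 + 2j) * (3 + 4j)    # complex literals and arithmetic are built in
mixed = 2 * (0.5 + 1j)     # ints and floats coerce to complex automatically

# the trade-off: these coercion rules are fixed by the language,
# and a user-defined numeric type gets none of them for free
```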

### Almost everything Jeff lists

Almost everything Jeff lists there is sugar rather than core language complexity. The type system turns out to be desirable for many other reasons, not least of which is the likely failure of the language designers to do exactly what the programmer wants in all situations...

### But those features are only

But those features are only needed to make things usable even if they aren't defined in the language itself. In Haskell they created a language which is able to express lots of basic functionality via libs. Then they noticed that it's not that easy to use and they added lots of sugar to mask that.

So why not simply create a more simple language in the first place and build in all the required basic functionality instead? Sure, if someone invents some fancy new data type, you can't simply modify some library to use it as simply as a built-in data type. But I think that this simply doesn't happen often enough to really count as a reason not to do it.

### But those features are only

But those features are only needed to make things usable even if they aren't defined in the language itself.

Unless your language is designed with a prescient knowledge of all possible domains, it's necessary for things to be usable even if they're not part of the language.

In Haskell they created a language which is able to express lots of basic functionality via libs. Then they noticed that it's not that easy to use and they added lots of sugar to mask that.

So why not simply create a more simple language in the first place and build in all the required basic functionality instead?

That's not a "more simple" language at all, it's quite the opposite!

Much of the sugar can be stripped from Haskell without making the language unusable, too - a significant amount of it is there to support "coding style" (in a sense analogous to "writing style"). Typing aside, you could not improve the list support by any means other than adding more syntactic sugar. This has much to do with why most of the list support in Haskell is of a distinctly sweet nature...

### It's useless to compare a

It's useless to compare a minimal language without its libraries to a 'fat' language with much more built-in functionality. The libraries always add to the complexity of a language, so you have to consider them too.

Even in a simple language like Pascal it's quite easy to add additional functionality by creating libraries. It's always possible; only the degree of 'similarity' to built-in features and the scope of possible language extensions varies from language to language.

By using lots of built-in abstractions, instead of requiring language features that enable one to implement as much as possible in the language itself, the language can be simpler. Sure, the language as a whole will be more complicated than the minimal language itself - but if you add the (necessary) libraries to the language, even the 'fat' one will win, because this language needs less sugar and fewer tricks to enable easy implementation of complex things via libraries.

Haskell has added some sugar to make lists easy to use - but lists are a really natural data structure for every functional language so that's not really difficult. But what about arrays? Sets? Multidimensional sparse arrays?

Please don't get me wrong, I don't want to put Haskell down; it's a nice language with many interesting features. But I think that the primary goal for programming languages should be usability: to enable the programmer to build complex applications as quickly and easily as possible. Does Haskell really accomplish this task?

### IME, it does pretty well at

IME, it does pretty well at it and many ills can be fixed by rigging up bindings to external libraries via the FFI. There're things it's less good at, especially if you're forced to stick to Haskell 98, but this is what research is for - monads are most definitely a feature from where I stand.

I've already given you some examples for how further things could be sugared with surprisingly little effort. If you want to start throwing around new things you'd like language-level support for, I'd like to see your idea of a good DSL for manipulating them before looking at including them into a larger language.

Your comments about requiring "sugar and tricks" to implement libraries just don't hold in my experience. Where they exist, the tricks're all ones I'd be exceedingly disappointed not to see in your 'fat language' anyway.

My experience is that Haskell libraries tend to be simple, and considerably easier to work with than those I've used in fatter languages. The mere fact libraries can be factored out into separate modules makes a huge difference to the overall complexity.

### Interesting Question!

karsten_w: But I think that the primary goal for programming languages should be usability: to enable the programmer to build complex applications as quickly and easily as possible. Does Haskell really accomplish this task?

I can't answer that last question. But with respect to ease of use, I have a relatively concrete metric that I apply, and that's orthogonality within the language. Languages that I appreciate don't necessarily have shallow learning curves—O'Caml doesn't, Common Lisp doesn't, Oz doesn't, C++ most certainly doesn't—but the ones that I choose to use on my own time, vs. using them for work, have the property that, once I learn a principle about them, it stays learned with a minimum of "but" and "except for" in any description of it. (Given this criterion, it should be obvious that I appreciate C++ and Common Lisp for different reasons than orthogonality!)

So suggestions of a "kitchen sink" language—particularly ones that sound as if it really is just a matter of tossing features together—tend to scare me. They sound extremely naïve when they don't present a very specific, concrete plan for achieving orthogonality or make reference to highly featureful languages with good orthogonality (e.g. Oz, O'Caml, Haskell, Icon, Smalltalk...) or featureful languages with poor orthogonality (Perl, C++, Java, Python, Ruby...). Thinking that you're going to design a good language ex nihilo is a shocking conceit, a hubris of Greek mythic proportions.

### What do you mean by

What do you mean by orthogonality?

### you may want to find the original RISC vs CISC papers

for somewhat-related research that may analogize well to your question

Some of the original RISC research documented

1. how frequently esoteric instructions were used
2. the direct cost (CPU transistors) to implement rarely-used instructions
3. some indirect cost (loss of resources, clock slowdowns) for the simpler, frequently-used instructions

the one paper I'm thinking of was about a FORTRAN compiler

### Didn't they both "win" in the end?

I mean, look at x86: it still has a CISC instruction-set interface that gets translated into a RISC-like instruction set under the hood. Isn't that exactly what the OP is advocating anyway?

### Not really.

The CISC interface is most likely only there for legacy compatibility. I'm sure the engineers would love to strip away the CISC->RISC translator.

### CISC->RISC

Actually, no.

CISC's compressed instructions require less storage. And the CISC->RISC translator gradually becomes relatively smaller and smaller because of its constancy.

### I'm no hardware engineer

but enlarging a cache(*) would seem completely trivial compared to the complexity of a CISC->RISC translator in the instruction pipeline. (Maybe a bigger cache is more expensive, I dunno.) But as I say I'm no hardware engineer, so I'll be happy to concede that point.

(*) System RAM sizes have increased so much in recent years that code size is largely a non-issue except for cache.

### Note that we're talking

Note that we're talking about x86, which is not only CISC, it is ugly, crufty, etc.

Yes, CISC reduces instruction-cache usage, but there are other ways, such as ARM's Thumb-2, whose mixed 16- and 32-bit ISA manages to have nearly the same density as x86 with simpler decoding.

### I'm sure the engineers would

I'm sure the engineers would love to strip away the CISC->RISC translator.

Not just the engineers...

[Exercise to reader: (Attempt to) write a half-decent code generator (in the compiler writer sense) for a language targeting the x86 and targeting a RISC architecture.]

### Six of one, half-a-dozen of the other...

Library design is language design.
Language design is library design.

Bjarne Stroustrup, of C++ fame (or is it infamy).

If the language designers make a standard library that's difficult to use, will a built-in of the same functionality be any easier to use?

On the other hand, if the language designers have a built-in that's easy to use, couldn't that same ease be built into the standard library?

Eric

### Not every language design is

Not every language design is good. But the design of a lib is always constrained by the language itself. So if a lib is ugly, maybe that's only because of the language itself?

On the other hand, if the language designers have a built-in that's easy to use, couldn't that same ease be built into the standard library?

That's exactly the thing I'm questioning here. Maybe it is possible, but besides Lisp I don't know about a language which really is able to do it. And even if Lisp could do it, it doesn't.

So why not stop trying and simply design a language in a 'new' way: make it simple to use, with a rich syntax and lots and lots of sugar and built-ins. I want, for example, a simple 'print': no complexities like importing packages, creating stream objects or using monads only to write something somewhere.

### What is complex in a library print?

For example in my language Kogut:
WriteLine "2+2 = " (2+2);
No importing needed, no stream objects, no monads. But it's just a function. It's not worth special syntax nor special evaluation rules.

This is true for many features. And for most of those for which it's not true, a macro suffices. Putting things in the core language doesn't make them significantly easier, or often any easier at all.
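Python tells a similar story: print was statement syntax in Python 2 and became a plain function in Python 3, which lost nothing in convenience and gained composability:

```python
import io

# just a function call: no stream objects, no monads
print("2+2 =", 2 + 2)

# and because it is a function, redirection is just a parameter
buf = io.StringIO()
print("2+2 =", 2 + 2, file=buf)
```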

### And if you want to write to

And if you want to write to a different kind of stream? Use a different encoding? Modify the buffering strategy?

Macros are a way, but only if you have a very powerful macro language. Lisp is very good here, because the macro language is Lisp itself. But macros have the problem that it's difficult to prevent people from defining their own languages, which leads to worse maintainability compared to a fixed language.

### I don't understand your point

If you want to do more complex things, the combination of functions needed to achieve that becomes more complex, obviously. It's all doable when the language is sensibly designed, no matter whether these features rely on built-in syntax or pure library interfaces.

Usually it's easier when they rely on pure library interfaces, because custom abstractions needed for non-standard combinations fit into the conventions established by standard ones.

See Guy Steele's Growing a Language. Notable quote:

APL was designed by one man, a smart man—and I love APL—but it had a flaw that I think has all but killed it: there was no way for a user to grow the language in a smooth way. In most languages, a user can define at least some new words to stand for other pieces of code that can then be called, in such a way that the new words look like primitives. In this way the user can build a larger language to meet his needs. But in APL, new words defined by the user do not look like language primitives at all. The name of a piece of user code is a word, but things that are built in are named by strange glyphs. To add what look like new primitives, to keep the feel of the language, takes a real hacker and a ton of work. This has stopped users from helping to grow the language. APL has grown some, but the real work has been done by just the few programmers that have the source code. If a user adds to APL, and what he added seems good to the hackers in charge of the language, they might then make it be built in, but code to use it would not look the same; the user would have to change his code to a new form, because in APL a use of what is built in does not look at all like a call to user code.

You write:

But macros have the problem that it's difficult to prevent people from defining their own languages

Languages designed for preventing people from doing bad things are pathetic. The only sensible approach is enabling people to do good things. Cultivating good code, and forgetting about bad code.

### foster the good, forget the bad

Qrczak: Languages designed for preventing people from doing bad things are pathetic. The only sensible approach is enabling people to do good things. Cultivating good code, and forgetting about bad code.

Nicely said: foster the good, forget the bad. Rinse and repeat.

Now if only you could get folks to forget the bad. :-) Coders often have so much short term and long term memory, both, that a sensible strategy of forgetting goes deeply against the grain of endemic packrat tech hoarding. Also, bright folks are very prone to obsessive compulsive disorder when it comes to deconstruction of terms and logic. For example, I can predict someone desperately wants to attack the word 'forget' by bringing out whatever free associations come to mind, just for the joy of dissection.

I wish we used koan oriented design more often: take a phrase like "foster the good, forget the bad" and squeeze out useful lessons without obsessing over ways it doesn't apply.

[Edit: oh yes, and prevention is pathetic. Isolation is more practical. Instead of forbidding cooking because heat is dangerous, just provide a kitchen and a stove where the finger burning activities belong.]

### Too many unsolved problems, too much change

Today most usage patterns are commonly known and while there are lots of them, the total number of those patterns seems quite manageable.

This seems false to me, on a lot of levels. Today, a lot of usage patterns for computer programs certainly are known. A quick scan of just the practical pattern books on my shelves puts the number of common usage patterns in the low thousands, far too many to build into a reasonable programming language. This is particularly true since so many of them conflict with one another, and have to be carefully firewalled apart to avoid catastrophe. Hell, the number of different encapsulation usage patterns is probably in three figures.

Moreover, a lot of these usage patterns, while common, are still pretty bad. This reflects the facts that computer programming is an immature craft, that our ability to train adequate numbers of even journeyman computer programmers is poor, and that resource restrictions of all sorts still require many unfortunate tradeoffs to be made.

For example, it was thought by some a decade ago that the GoF Singleton pattern was central enough to object-oriented software development that direct language support for it would be a good idea.
It would be easy: just an annotation on a class that it had precisely one instance, and some clever way of referencing it. This made some sense given the application domains of the time, and the restrictions on CPU speeds and process spaces that many applications had to cope with. Nowadays, the Singleton pattern looks a lot less sensible, and even pretty quaint. Pretty much every computational resource is remotable, fallible, and upgradeable, and even wristwatches have enough computational power and connectivity to take advantage of those facts. In that context, the idea of any computational object being coded as globally unique and process-eternal is pretty quaint, and is already being discussed for "anti-pattern" status.
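The imagined annotation can be approximated today with a library-level decorator; a minimal Python sketch of the pattern under discussion, not an endorsement of it:

```python
def singleton(cls):
    """Replace a class with an accessor for its single, lazily built instance."""
    cache = {}
    def get_instance():
        if "it" not in cache:
            cache["it"] = cls()
        return cache["it"]
    return get_instance

@singleton
class Config:
    def __init__(self):
        self.values = {}

# every 'construction' now yields the same process-wide object,
# which is exactly the property the paragraph above criticises
```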

On the other hand, the value of libraries is set to explode, but that's a topic for another time.

### Most of the conflicting

Many of the conflicting patterns are paradigm-dependent. You have to use different patterns for functional and OO programming. The other conflicting ones often tackle the same problem with different methods, where no single method is really superior.

If you support pattern A in the language and not the alternative pattern B, then A is used and thus no problems arise; in fact the limitation to using A makes the design process easier and the code more maintainable. Sure, maybe B would be a little bit better for a certain problem, but who cares, as long as you can use A and have direct support in the language?

Most bad patterns are a result of the limitations of the implementing language. An example is the visitor pattern, which only exists because most OO languages have no pattern matching. But if a language has pattern matching, why would anybody use the visitor pattern instead? Just put support in the language instead of requiring the programmer to use bad patterns.

True, lots of patterns depend on the underlying capabilities of the hardware. But that's also a reason to put as much support as possible in the language instead of requiring it to be written in the code. In your singleton example, a later revision of the compiler could make singletons simply remotable, or use caching and recalculation, depending on the usage patterns of the singleton object. If you have to code singletons explicitly, the compiler can't change that.

### A mismatch of timescales and sums involved

In your singleton example, a later revision of the compiler could make singletons simply remotable, or use caching and recalculation, depending on the usage patterns of the singleton object. If you have to code singletons explicitly, the compiler can't change that.

It's not that programmers want fancier singletons; it's that there are fewer and fewer valid uses for any sort of singleton, no matter how fancy. If you had put singletons in your language in 1990, you'd still be stuck with them in 2006, when they are looking kind of stupid. The history of software architecture is filled with answers that were good at the time but have since become outdated.

Requirements change quickly, compilers change infrequently, and viable languages change very, very slowly. Worse, the changes to viable languages usually need to be backward-compatible (or else the language quickly ceases to be viable). I would probably have to specifically recommend against adoption of a non-minimalist language like you describe, as it would involve investing a large amount of effort (and my sponsors' money) in something destined to become quickly obsolete.

### I doubt that

I doubt that requirements really change that fast anymore. Most basic things have already settled, and if you put them in a language it's highly probable that you will still need them over the next years too.

And if things really change too much: use a different language. If your design uses lots of stuff nobody uses anymore, it's probable that you'll have to rewrite it from scratch or maintain it in its flawed state anyway.

And just as libs and frameworks change, languages can change too. If you want to migrate some app from Struts to Spring, you have to rework lots of stuff too. Why is that more difficult than migrating a program from an earlier release of some language to a later one?

But by 'putting patterns into the language' the problem isn't that severe, because the compiler can 'refactor' your program to a certain degree itself. For example, by automatically creating code to distribute those pesky singletons over a network in a later, more network-aware compiler revision. If you've written everything explicitly in your code, you have to rework it yourself - a much more difficult task.

### LOL.

Quoth the parent:

Most basic things have already settled

Ha! Why, then, is it that we have dozens of object serialization libraries, dozens of database interface libraries, dozens of database types even, hundreds of different implementations of various ADTs, etc...?

Granted, lots of people/places suffer from the NIH syndrome and we could probably stand to lose many of these implementations, but remember that many ADT implementations (just to pick one category) which support the same interface may have conflicting characteristics. Example: implementation A may support O(1) lookup but require O(n^3) space, while implementation B may only support O(log n) lookup but require only O(n) space. Which you end up choosing depends heavily on where your code is actually going to be running.
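The conflicting-characteristics argument can be illustrated with a small Python sketch (class names invented for illustration): two lookup structures expose the same `get` interface, but one trades memory for O(1) average access while the other trades lookup speed for compact storage.

```python
# Two lookup structures with the same interface but different tradeoffs:
# a hash table gives O(1) average lookup at the cost of extra memory;
# a sorted array gives O(log n) lookup with compact, cache-friendly storage.
import bisect

class HashIndex:
    def __init__(self, pairs):
        self._table = dict(pairs)        # O(1) average lookup, more memory
    def get(self, key):
        return self._table[key]

class SortedIndex:
    def __init__(self, pairs):
        pairs = sorted(pairs)            # compact parallel arrays
        self._keys = [k for k, _ in pairs]
        self._vals = [v for _, v in pairs]
    def get(self, key):                  # O(log n) binary search
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._vals[i]
        raise KeyError(key)

pairs = [("b", 2), ("a", 1), ("c", 3)]
assert HashIndex(pairs).get("b") == SortedIndex(pairs).get("b") == 2
```

Both satisfy the same interface, yet neither dominates the other, so which one Alice or Bob needs depends on their workload.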

So we've hopefully established that it is impossible to trim away many of these different implementations of ADTs, DBs, etc. Otherwise your language will not make anyone happy. To see why, consider this simple example: Alice may need ADT implementation X while also needing DB implementation A, while Bob may need ADT implementation Y while also needing DB implementation B. Unfortunately you chose ADT implementation X and DB implementation B satisfying nobody's requirements. This gets exponentially worse as the number of requirements increases.

This reasoning can only lead one to conclude that you are (effectively) suggesting that your SooperLanguage should contain dozens of different implementations of, say, the Tree ADT. I'd be very interested in how you plan to do that without the language implementation collapsing under its own weight. How are you going to a) write all that code, and b) test all that code? With libraries, the code writing and testing is distributed to library implementors, who do not need to know anything about the language implementation's internals to be able to write their libraries.

In short: You're simply wrong and only need to think about what you're suggesting a little bit more to realize it.

(Why do I get the distinct feeling I've been trolled?)

### Last answer because discussions are off-topic here

This reasoning can only lead one to conclude that you are (effectively) suggesting that your SooperLanguage should contain dozens of different implementations of, say, the Tree ADT. I'd be very interested in how you plan to do that without the language implementation collapsing under its own weight.

By inventing a good abstraction instead of providing lots of similar but slightly different implementations. This abstraction can then be translated by the compiler into a concrete implementation, letting the compiler choose it via some heuristic or via hinting.

I may be wrong, but I'm still waiting for someone to show why. Maybe the problem is talking too much on a meta-level instead of presenting a concrete implementation - which is of course much more difficult. Before I spend months or even years trying it, I simply wanted some feedback.

### This has nothing to do with

abstraction or interface. To satisfy all the requirements programmers have, you still need to implement N types of Tree ADTs (all with the same interface), N types of DBs, N types of sets, etc., since they have conflicting performance/space/cache-locality/... characteristics for different operations(*). There is no single type of tree which can satisfy everyone, so you have to implement all types -- this follows from the "combination problem" I described in the parent post with the example of Alice and Bob. Implementing everything isn't feasible. To see why, simply imagine a "complete" Perl. It would have to include almost everything(**) on CPAN to satisfy even a fraction of developers, since their needs are so different.

You obviously don't seem to be listening to me, nor the many other people who've pointed out other flaws in your reasoning, so I'm done.

(*) No, your compiler is not going to be able to discover new data structures to fulfill some arbitrary set of requirements from the programmer. Not in any foreseeable future. So you cannot wiggle out by saying "the compiler will create a data structure to order". A human programmer still has to provide the basic implementation, even though there may be tunable parameters which a compiler could deduce.

(**) Modulo Not Invented Here syndrome.

### Volume

Last answer because discussions are off-topic here.

To clarify our rationale: in the first 36 hour period after you posted your topic, the number of comments you posted made you the most prolific poster on LtU not just for that 36 hour period, but for the past week.

If you examine your own comments, I think you'll find that many of them are simply clarifying or adapting your own position, and in quite a few cases, speculating about very sophisticated solutions as a way to make the approach you're suggesting viable.

You've received plenty of feedback. What we're asking is that you demonstrate that you've put some effort into thinking about the topic and incorporating the feedback you've received, and provide something more solid to form a basis for further discussion. You may find that taking the time to clarify your own thoughts on the matter by writing about them will help answer some of the questions that you feel haven't been answered here.

### Many features in the language --> implementation problem

Say you put associative arrays in the language. There are different ways to implement an associative array, with different memory-usage/CPU-usage tradeoffs: what if the built-in implementation doesn't fit your needs?

You then have to implement your own version of the feature, but usually it will be 'less nice' to use, as you don't have the same syntactic sugar at your disposal.

That's why the trend is to improve the level of 'syntactic sugar' available to the user while putting everything else in libraries.

### false estimation

For standard cases: use hinting or a clever compiler with runtime profiling. I assume that there will be no really new algorithms in the area of ADTs for some time (and if there are: just put them into the next compiler revision and use them automatically). Has anybody really used anything other than the standard hashes/lists/quicksorts etc. recently? Frequently?

I think that's part of the fallacy which led to all those minimal languages: the fear that you'll need something the language doesn't support. But does this really happen in practice? Aren't those few basic datatypes everybody uses every day really sufficient for the huge majority of situations?

And if you really need something else at some point, fine, just implement it via the language. It will look uglier and may be more difficult to use than a 'self-made' datatype in your favourite minimal language designed with extensibility in mind, but who cares, if it happens only a few times in a really big program?

Wouldn't it be much better to support those 99% standard use cases as well as possible, even if it makes the rest maybe three times as complicated?

### Numerics and Algebra

To be honest, I wouldn't want most people from the software patterns camp to design language features (as opposed to library features) that deal with numeric problems. Just have a look at the mis-designed complex number C extension or C++ class (as an example of a language that claims to do it right: http://www.digitalmars.com/d/cppcomplex.html).

When it comes to numerics and algebra, I rely on numerics and algebra experts, not on language designers. And I'm tired of waiting for languages like D to become mainstream, let alone waiting for language-design-inclined experts in tensors, quaternions, or Clifford algebras to come up with a language that will eventually become acceptable to my employer - it'll never happen. Thus, I greatly prefer the library approach. Unfortunately, while this might be an option with a well-designed macro system, it isn't with the mess of mainstream OO languages.

All that remains for me is to do the fun part, R&D and prototyping, in languages such as Mathematica, and then do it the hard way, probably with code generation, in C and C++.

### Why would that never happen?

Why would that never happen? I think it's because of the 'small is beautiful' thinking many language designers have, and I ask myself whether this thinking is perhaps flawed and ultimately the reason for the lack of a really useful programming language after years of research in this area.

And if world-wide collaboration via the internet is possible in the area of library design, why not in the area of language design? If the language designer(s) present the first prototype and ask for ideas and critique, most problems should become obvious very fast and could be prevented before the first release of the language.

I also don't think that languages should be created for eternity. Why should 'revision 2' of a language be 100% source compatible with 'revision 1'? Why not simply write a converter which translates programs automatically into the new version (which should be simple, if you create 'r2' of the language with that in mind)? That would free the language designers from lots of compatibility concerns and would serve the evolution of the language.

### First Class Everything?

As a user of PLs (not a designer), I appreciate PLs that are extensible, especially ones that make the code I write appear to be first class within the language. I would rather not have to deal with a kitchen sink PL that has all sorts of complexity in the base syntax. I'd rather have standard libraries that integrate seamlessly with the syntax of the language - making the libraries appear to be first class. I consider all the code I write to be kind of an act of writing libraries - even if I'm the only one that will ever use them as such.

Now one can lament that not building these features into the base syntax results in a plethora of libraries, non-standard implementations, etc. But is that the fault of the PL? Or is it the fault of the users not coming together to agree on a standard set of APIs? Sure, there are fewer people involved in the design of the PL, so it's theoretically possible for the PL designers to mandate by fiat what features get integrated into the language. But when you push stuff out of the libraries and back into the base syntax, you have to get many more people involved in the process (thus killing the advantage of a small group). It's not by accident that languages with complex syntax, like Ada, are designed by committees. And even after you've thrown all this stuff at the PL, you still have to establish massive libraries to do the chores that the PL hasn't got a prayer of integrating.

As Stroustrup has said in the past, there is a certain amount of complexity that you cannot resolve simply. You can either build it into the PL or push the problem to the libraries, but that complexity still exists. Perhaps others agree that having that complexity in the PL is the best balance, but I tend to side the other way. Of course, the bigger issue is that PL design and library design are interrelated (as alluded to by others in this thread). Trying to make it a PL-only issue just rearranges the chairs on the deck, but doesn't really solve the fundamental problem that PL design and library design are hard. Even harder is getting everyone on the same page.

### Why should your code appear

Why should your code appear 'first class'? It never is (ok, besides maybe in Lisp). And why do you prefer complexities in the libs to complexities in the language? I think that a huge part of the usual complexities in most libs result from limitations of the underlying language. By integrating them into the language you could probably avoid lots of those complexities.

Now one can lament that not building these features into the base syntax results in a plethora of libraries, non-standard implementations, etc... But is that the fault of the PL?

It doesn't matter whose fault it is. It's the result that counts. Sure, having nice standard-libs will help too, but as someone who does lots of work in Java, I know the limitations of the library approach (and it's still getting worse). Sometimes you have to do really hair-raising stuff to accomplish relatively easy things. Sure, maybe that's all Java's fault, but I've seen similar things in other languages.

### Just my opinion

Why should your code appear 'first class'? It never is (ok, besides maybe in Lisp).

My stake in the code that I work with has the most impact on me. Sure, it's a selfish POV, but ultimately it's about getting things done. Or to cite Guy Steele from the link at the bottom of this post:

Every time you write a new function, a new method, and give it a name, you have invented a new word. If you write a library for a new application area, then the methods in that library are a collection of related words, a new technical jargon for that application domain. Look at the Collection API: it adds new words (or new meanings for words) such as "add", "remove", "contains", "Set", "List", and "LinkedHashSet". With that API added to Java, you have a bigger vocabulary, a richer set of concepts to work with.

And why do you prefer complexities in the libs to complexities in the language?

Because the complexities introduced in the language tend to impact all code written for all solutions, whereas the complexity in the libraries can be more localized.

I think that a huge part of the usual complexities in most libs result from limitations of the underlying language.

I always view these things as a tradeoff. You can make certain patterns simpler by integrating them in the language, at the cost of making the base language more complex and possibly fragile.

By integrating them into the language you could probably avoid lots of those complexities.

Certain things can be made easier, but it remains to be seen whether it amounts to a zero-sum game.

Sure, having nice standard-libs will help too, but as someone who does lots of work in Java, I know the limitations of the library approach.

The question is whether simplicity, in the elegance sense, can make libraries less complex to build and to use. I find that complex syntax does not necessarily promote making better libraries. But I realize that this is just an opinion. I don't think that Java would necessarily be a better language if we just stacked more syntactical elements on top of it; such additions interact with Java syntax, which has many historical problems. The drive for simplicity in PL design is as much about building a language that can grow.

### every language needs libraries, but...

Because the complexities introduced in the language tend to impact all code written for all solutions. Whereas the complexity in the libraries can be more localized.

That's only true for high-level libs, but not for the more basic stuff.

It's obvious that every language needs some kind of library; a language which didn't would have to solve every problem in a single line of code. So libraries are a must, as are some means of extensibility.

I'm simply questioning the goal of creating languages which do as much as possible via libraries, compromising expressivity and usability for it.

A perfect example of how this can go wrong is Java generics. In practice you need them only for (somewhat) typesafe collection classes. But look at the complexity they added to the language. I suspect that if Java had full-scale collections directly integrated into the language, nobody would have thought about creating generics (not that I'm against generics in general, but in Java it went very wrong).

It's all about overhead: if you have to use a certain feature very often (as with most of the basic stuff), then its usage should be as easy as possible. If you use it only seldom (like most high-level stuff), the overhead is not so important. Because of this, a language with a rich built-in feature set can afford more cumbersome lib usage, because that usage happens relatively seldom. And because of this, the language doesn't need complex mechanisms for integrating features via libraries.

Things like "list.get(1)" is much more annoying compared to "list[1]" because you need it very often. But if you create a web-service via a lib-function, it doesn't matter if you need even an additional line of code. Of course you can use operator overloading instead, but that would create other kinds of problems. And operator overloading is really only usefull for those 'basic stuff' which could simply be part of the base language.

### Simplicity is many times the answer

This particular example is a good indication of where simplicity is the answer. The immediate problem is that you want an index accessor for a list. So the knee-jerk reaction is to add special syntax for lists that translates list[1] into list.get(1). But this misses the opportunity for better abstraction. The index accessor is not just useful for lists, but for all collections that have some semblance of ordering (arrays, sets, trees, ...). So the answer is not to complicate the syntax just for the special case of lists. Rather, the language should provide a common pattern of syntactic sugar for all collections, in all current and future data structures. Not only that, I really want these collections to be treated as iterable in a foreach type of construct.

So the drive to simplicity is not necessarily aimed at cutting out syntactic shortcuts. Rather it is aimed at finding the essence of the abstraction. Because the abstraction can be made to cover more than just lists, the drive to simplicity means that the language should actually gain in expressivity.

What I don't want is one set of sugar that applies to lists, another to arrays, another to trees, and so on. Find something that decouples the syntax from the type of data structure being manipulated, and you have a generic framework that can be applied to all manner of problems.
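Python happens to be a concrete instance of this decoupling: the single syntax `x[i]` is sugar for the `__getitem__` protocol, so one piece of sugar covers built-in and user-defined structures alike. A minimal sketch (the `RingBuffer` type is invented for illustration):

```python
# One syntax, many data structures: x[i] desugars to x.__getitem__(i),
# so any type implementing the protocol gets the indexing sugar for free.
class RingBuffer:
    """A fixed-size circular buffer that still supports plain indexing."""
    def __init__(self, size):
        self._items = [None] * size
        self._start = 0
    def __getitem__(self, i):
        return self._items[(self._start + i) % len(self._items)]
    def __setitem__(self, i, value):
        self._items[(self._start + i) % len(self._items)] = value

buf = RingBuffer(3)
buf[0], buf[1], buf[2] = "a", "b", "c"
assert buf[1] == "b"
# Built-in lists, dicts, and this user-defined type all share one syntax:
assert [10, 20, 30][1] == 20 and {"x": 20}["x"] == 20
```

The sugar is defined once, against a protocol, rather than once per data structure.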

Anyhow, that's the theory behind the drive to find common patterns. Now, sometimes it'd be nice if I had special syntax for things like handling XML or strings or databases or... But as often as not, adding things to a language causes as much grief as it solves. I'm not sure why you cite Java generics, as that's exactly the kind of thing that happens when you start stacking more stuff on top of an existing language. Well, perhaps we could redesign Java from scratch to make the generics blend more seamlessly with the historical baggage of Java. But then perhaps the real solution for Java is to return to square one and try to make it simpler.

One other thing I should mention, while I'm wasting bits, is that there are many different languages being designed for many different purposes. The suggestions you make would be totally against the philosophy of some languages like Standard ML, which has rigor as its goal (it's probably one of the most conservative languages). However, there are languages being developed which are much more open - the design of the Fortress language is probably more to your liking, though it is aimed at massive scientific computing. Not all languages are designed to solve every known problem in the universe - but there are some that pursue the goal of hegemony.

### I also see the goal of

I also see the goal of creating simplicity by finding the 'essence' of abstractions. The problem is simply that that's much more difficult if you have to implement it in a library rather than build it into the language itself.

For example with collections: one possibility to simplify them is to have only a single type of collection, and the compiler creates the 'real' collection based on the usage pattern. If you always access the elements in order and do random inserts, it uses a linked list. If you do random access, it uses an array. If you search for elements, it uses a hash or a sorted list with binary search. To specify additional constraints or hint at your expected usage patterns, you can define additional attributes, like 'use it as a map with string keys', 'use it as a set' or 'use a fixed size and an LRU algorithm to free space if required'.
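As a rough library-level illustration of this idea (not a real compiler; the class name, threshold, and strategy are all arbitrary inventions), here is a Python sketch of a collection that watches its own usage and switches representation:

```python
# A toy sketch: a collection that observes its own usage pattern. It keeps
# a plain list for ordered/indexed access, but after repeated membership
# tests it builds a hash index so "x in c" becomes O(1) instead of O(n).
class AdaptiveCollection:
    INDEX_AFTER = 3          # arbitrary: build an index after 3 lookups
    def __init__(self, items=()):
        self._items = list(items)
        self._index = None   # built lazily, when usage justifies it
        self._lookups = 0
    def append(self, x):
        self._items.append(x)
        if self._index is not None:
            self._index.add(x)
    def __getitem__(self, i):
        return self._items[i]            # ordered access stays a list
    def __contains__(self, x):
        self._lookups += 1
        if self._index is None and self._lookups >= self.INDEX_AFTER:
            self._index = set(self._items)   # usage says: build the index
        if self._index is not None:
            return x in self._index
        return x in self._items          # O(n) scan while lookups are rare

c = AdaptiveCollection(range(100))
assert 50 in c and c[0] == 0
for _ in range(5):
    assert 99 in c                       # repeated lookups trigger the index
assert c._index is not None
```

A compiler doing this statically (or via profiling) could make the same decision without any runtime bookkeeping, which is the stronger claim being made here.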

But collections are just one example. What about creating an 'observer' simply by using an 'observe' declaration, with everything else created by the compiler? The same for serialisation, database access etc. Or look at the field of GUIs: you want to edit a datatype? Simply add some attributes to the definition, and the compiler creates a GUI dialog to edit this datatype if you want one. And because all those features are combinable, you can use them together with those fancy collections, database access etc.
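As a sketch of what 'observer as a declaration' might look like when the plumbing is generated by machinery rather than written by hand, here is a Python descriptor version; all names (`Observable`, `observe`, `Thermostat`) are invented for illustration:

```python
# A descriptor turns a plain attribute into an observable one. The
# notification plumbing lives in one place; each class just declares
# which attributes are observable.
class Observable:
    def __set_name__(self, owner, name):
        self._name = "_" + name
    def __get__(self, obj, objtype=None):
        return getattr(obj, self._name, None)
    def __set__(self, obj, value):
        old = getattr(obj, self._name, None)
        setattr(obj, self._name, value)
        for callback in getattr(obj, "_observers", []):
            callback(old, value)         # notify every registered observer

class Thermostat:
    temperature = Observable()           # the whole "pattern" is one line
    def __init__(self):
        self._observers = []
    def observe(self, callback):
        self._observers.append(callback)

seen = []
t = Thermostat()
t.observe(lambda old, new: seen.append((old, new)))
t.temperature = 21
t.temperature = 23
assert seen == [(None, 21), (21, 23)]
```

This is still metaprogramming in a library, of course; the argument above is that a language-level `observe` declaration could hide even this much machinery.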

All this is also possible in minimal languages, but often only with lots of metaprogramming, use of reflection, macros etc. Integrating it right into the language would in most cases be much simpler, and the result would be better. And if you design all of this 'from scratch', but with knowledge of the wanted features, those features would be better integrated than the work of many different programmers creating many separate libraries. And that better integration would lead to much more simplicity, even if the language itself is much 'fatter'.

Java generics are just a case of 'useless' complexity: they are primarily needed to build typesafe collections in Java via libraries, and they are relatively complex. If instead all those standard collections had been built into the language, that would have been easier to implement, and the result would have been much easier to use too.

### One advantage of macros that

One advantage of macros that you've missed: it's possible to control which macros are in scope and thus which new 'language' features are available - useful if some of them conflict. If your monolithic language can do this, it becomes the weak cousin of a minimal language with macros that happens to include all the monolithic language's features in its libraries...

### Suppose...

... that rather than defining a language and writing a standard library, and worrying about which content should go into each, one were to create a "minimal" language for describing languages. The language itself could be small, while the expressions one could create with it would be unlimited. You ask "Why doesn't a language have special syntax for GUIs, SQL, etc.?", but I say the real question is "Why isn't there a language that lets me write a special syntax for my task du jour?".

If you want special syntax for common tasks, I think what you really want is more powerful macros. In a world where macros can fundamentally extend the language implementation itself, the dividing line between language and library ceases to exist.

Karsten_w,

This discussion is following a pattern we've seen repeated on multiple occasions recently, which was described in the recent blog (not forum) post as "ungrounded discussion".

You've posted many comments clarifying or expanding on your perspective, and have received a lot of feedback. This would be a good time to pause and write up your ideas in a more considered fashion, taking into account the feedback you've received. When you're done, post it on your own web site or blog, and post a link to it in this thread. Discussion can then proceed with a firmer foundation.

### Some of my experiences

I've done a lot of programming in a 4GL (a big language, with over 1000 keywords), and one feature I liked was schema-checking the code against a database. That way it is fairly easy to do an impact analysis when making a schema change: just make the schema change, compile all modules, and those that don't compile are definitely no longer valid (a nice feature if your application consists of a couple of thousand modules and your database has a few hundred tables). When programming in general-purpose (minimal) languages I found the addressing of databases cumbersome and the lack of schema checking quite a nuisance, to be honest.
Apart from that, I have never longed to go back to the days of 4GL programming:

New features can only be introduced as part of the language itself, so only the developers of the 4GL are able to extend it; once you really need something in a certain project, you are stuck (which in my book is a disaster).

Built-in features are not easily deprecated, which means that lousy design decisions tend to stick around. Refactoring a library is no small task, and deprecating functions/methods is not something to take lightly, but it is being done, and sometimes appropriately. Within a big language new constructs are added, but badly designed old constructs stay forever for backwards-compatibility reasons. That way the old constructs never die out, because people keep using them.

Namespace bloat is horrible, leading to long names and resulting in very verbose code.

General stuff like control flow, the module system etc. did not, in the case of this 4GL, get the attention it needed. Most of the attention went into adding new features (XML, sockets, message queuing, COM, CORBA etc.), while the basic language remained too spartan to be enjoyable.

The language becomes brittle. Once a feature contains a bug, it can cause trouble all over the place. If a library contains a bug, don't include it, and the pain is no more than missing the functionality of that one library.

Language style can easily become inconsistent. This might sound counter-intuitive, but in my experience the libraries in e.g. Java, C, and Haskell have a more consistent style across all standard libraries than this particular 4GL had. New constructs were added in an ad hoc fashion, and in order not to confuse old-time programmers who didn't need the newer features (and hence didn't want to know about them), new features often differed in style from older constructs.

### PHP as well

PHP is also a kitchen-sink style language. Actually, it does most of what the original poster suggested: it's developed by a community, has just about everything built in, and makes incompatible changes with every release.

PHP's size creates issues that weren't present when, say, Perl was the major CGI language. Because most of its features seem to be added directly to the interpreter rather than as runtime-loadable modules, the base interpreter has to be recompiled to add new features, which isn't desirable for a hosting provider, or for an independent developer.

Aside from deployment issues, the language's naming conventions are also all over the place. Some functions use '_', others use studlyCaps to separate words. This adds mental overhead for developers, to the point that most only code with the specs at hand.

PHP would seem to be a solid counter-example to this design methodology.

### I've done a lot of

I've done a lot of programming in a 4GL (a big language, with over 1000 keywords), and one feature I liked was schema-checking the code against a database. That way it is fairly easy to do an impact analysis when making a schema change: just make the schema change, compile all modules, and those that don't compile are definitely no longer valid (a nice feature if your application consists of a couple of thousand modules and your database has a few hundred tables). When programming in general-purpose (minimal) languages I found the addressing of databases cumbersome and the lack of schema checking quite a nuisance, to be honest.

It's possible to do this in the database library, given a sufficiently advanced type system. It could also be done, in a somewhat uglier manner, using a macro system. Think dependent types.
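A library-level approximation of that schema check can even be sketched without an advanced type system: each module declares which tables and columns it uses, and a startup check compares the declarations against the live schema, so a schema change fails fast instead of at some later runtime query. A Python/sqlite3 sketch (table and column names invented for illustration):

```python
# Declared-usage schema check: compare what the code claims to need
# against the actual database schema, failing fast on mismatch.
import sqlite3

DECLARED_USES = {              # what the code claims to need
    "users": {"id", "name", "email"},
}

def check_schema(conn, declared):
    problems = []
    for table, cols in declared.items():
        rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
        actual = {r[1] for r in rows}    # column name is field 1
        if not rows:
            problems.append(f"missing table: {table}")
        elif not cols <= actual:
            problems.append(f"{table}: missing columns {cols - actual}")
    return problems

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")  # no email column!
problems = check_schema(conn, DECLARED_USES)
assert len(problems) == 1 and "email" in problems[0]
```

This catches the mismatch at startup rather than at compile time, which is weaker than the 4GL's guarantee; getting it at compile time is where the "sufficiently advanced type system" or macro system comes in.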

### Some clarifications

Since the discussion really went a bit overboard, I will try to clarify my ideas a bit.

I'm not really a theorist; I consider programming languages to be tools to get some kind of job done. The job (writing big software systems) is very complicated because of the complexity of the systems you have to build. Because of this, I view programming languages not only from the viewpoint of their theoretical beauty, but also from the viewpoint of their practical usefulness. And I think that lots of very interesting approaches have simply failed to deliver the latter. I've looked at lots of programming languages and tried to work with some of them - but I'm still doing my work primarily in Java. And I've asked myself often enough: why, oh why? I think the primary reason is practicality. While Java is relatively ugly, it simply is a language that lets you get your work done. Not in a particularly elegant, clever or even beautiful way, but with all those libraries and a good IDE you can be really productive.

But do I really like Java? No. I would always prefer a more elegant and clever language. But why are those elegant languages so difficult to use for real-world work? Is it only my stupidity? Or is it (at least in part) a problem of the languages?

I've thought a lot about this topic, and one idea I've stumbled upon is that the really brilliant language designers who are able to create a language like Haskell approach the topic from a very different angle than a practitioner who wants a language to get work done, not a research tool/object. If you try to build a work of art, you simply don't consider usefulness. And the result is the programming language equivalent of a super-hot designer chair which looks awesome - but is really uncomfortable to sit in.

The trend with those elegant languages seems to go in the direction of creating meta-languages: languages to create real languages in. Look at the Let's make a programming language!-thread: lots of proposals were simply for languages which can be used to create programming languages. And look at Lisp, which gets lots and lots of admiration because of its macro system, which enables the creation of new language constructs directly in the language.

But is this approach really useful in practice? Language design isn't easy, even for people who put lots of effort into doing it. I've designed around 15 languages over the last 20 years, and so far none was good enough to publish. Maybe it's again my stupidity, but I think it's also a very hard job if you really want to create something good and useful. And if that's so, why should the creation of a language-creation language change this? If parser generators made language design easier, then since the invention of tools like yacc there should have been an abundance of good programming languages on the market. But where are they? Why do I still have to use something like Java? The answer is: tools don't create good languages, good language designers do.

But if this is true (and I'm sure it is), who are the people who would use all those fabulous meta-languages to create real languages, languages which enable the common programmer to get his work done? The common programmers themselves? They simply can't do it, because it's a hard task and they have neither the time nor the knowledge for it.

I think that's the fallacy lots of modern languages fall into: thinking that the creation of a tool to build good abstractions would let those 'common' people really create those abstractions. But they won't, because they're unable to. And because of this, a language like Java, which is easy to use and actually prevents the creation of new abstractions (because of its limitations), is so useful: people stay on the path the language provides and thus have a kind of guide that prevents them from doing 'stupid' things. This leads to lots of useful tools and libraries, and in the end to much better productivity than you would expect from looking at the limitations of the language.

How does this relate to the 'fat' languages I initially asked about? My idea: if a language designer doesn't create a meta-language, but instead a language which is directly useful without requiring the creation of a 'real' language first, it will automatically be such a 'fat' language, because it needs lots of things built in.

But is that really necessary? Why not build a minimal language together with a comprehensive library which can be used to get the work done? Maybe this would work too, but I see two problems:

First: it's more difficult. Creating a meta-language plus a real language (in the libs) is simply more work than creating only the language itself. And because you use a meta-language (with additional limitations) rather than a standard compiler approach, the task gets even more difficult, and the resulting language inherits some of the limitations of the meta-language. Because of this additional complexity, the resulting language will need more time and will be worse than necessary.

And second: who prevents all those bad language designers from creating their own libraries? Sure, nobody forces you to use those bad 'languages' (and yes, they are languages, even if they are implemented as libraries). But what if the 'bad designer' is part of the team that created the language and his work is part of the standard libs (he wasn't good enough to create the language itself, so he had to work on the libs)? Or if using his work is simply necessary because there is no alternative implementation? Or if you're forced by a customer to use his work because of compatibility or political concerns? If it's possible, there will be lots of cases where the common programmer has no choice but to suffer. I know that this kind of thinking goes against the belief of many programmers that freedom is always a good thing. But freedom can be misused, and even if you are able to use it wisely, the guy who sells your boss a new framework or the colleague who works on the same project as you maybe isn't. And freedom of choice always requires that someone chooses. But every choice can be the wrong one, and even if it's a good one, it always takes time and effort to find it: time one could also use to get work done.

The only way I can imagine right now to prevent those problems is to keep people from building those bad languages by giving them a good, useful and complete language instead. Sure, extensibility is unavoidable, but the more features you have built right into the language, the more you can afford less elegant means of extensibility.

Of course the next problem is how to avoid the complexity explosion many posters seem to fear in a very complex language. But I'm still not sure why that complexity should really be unavoidable. Why should only the limitations of the 'meta-language' (which is then used to create a real, usable language 'by library') be able to prevent the explosion of complexity? I really see no inevitability here.

Yes, it's true, there are lots of really bad examples of 'fat' or 'monolithic' languages. But I think that the good language designers are simply too interested in creating 'works of art', so they lose sight of the real goal ('build a language which lets you get work done'). And the relatively bad ones, who aren't able to create those really elegant languages, are left to create the 'workhorse' languages, and that shows. And then there is the 'Cobol trauma': if someone talks about 'fat languages', everybody thinks of Cobol and goes into some kind of defensive mode.

I won't write here about my ideas on how to create a 'good, fat language'. No, I'm not thinking about a 'kitchen sink' language in the tradition of 4GLs or even Cobol. I'm thinking about using better abstractions: not abstractions for abstracting the creation of other abstractions, but abstractions for directly creating applications.

...is that when someone finally gets around to creating the perfect PL, the language likely won't require programmers - so we'll all be out of jobs. :-)

I personally think you're operating under several logical fallacies, like (a) that everybody who uses PLs has the same job description, solving the same sorts of problems with the same considerations; (b) that theoreticians should be juxtaposed against practitioners, i.e. that simplicity is in conflict with pragmatics; (c) that all these syntactical features can seamlessly operate together in an ueber-language and make the life of the average programmer easier; ...

From what I gather, LtU Administration thinks you should flesh out these ideas offline, as they are mostly open-ended - of the ineffable variety - i.e. why can't I have a programming language that does everything I want it to do? Answers to such questions are mostly matters of opinion, so they are going to drive conversations in directions that are not necessarily productive. It's like starting a thread that discusses everything about PLs in general, but ends up being a discussion about nothing in particular.

### Agreed

Thanks Chris, you nailed it in every respect.

If there are regular members who disagree with this editorial decision, then please post about it in the blog not forum thread. A major purpose of that thread was precisely to ask the question: do regular, long-time members actually get anything from threads like this one? There didn't seem to be much disagreement with the proposed policy of "avoiding ungrounded discussions", which is the main objection in this case.

I'll say one other thing specifically related to this thread: I'm sure many, many LtU readers have, or have had, similar feelings to karsten_w about the various shortcomings of programming languages. I think we can take it as a given that we all understand and perhaps empathize with those kind of feelings. However, it's one thing to recognize that situation, and quite another to claim that you've found the problem, or see a way to a solution. If you're going to make such a claim, then it should meet a higher standard than usual, in terms of explaining the thesis clearly, and as concretely as possible, and demonstrating a good understanding of the existing PL landscape. Otherwise, as Chris said, all you have is "a discussion about nothing in particular", at best.

### I maybe don't really

Then maybe I don't really understand what the purpose of this blog is. Only listing new papers from researchers? Or only talking about certain very concrete features of a certain very concrete programming language or type system? If that's true, then this topic is really off topic here (like lots of other topics I've read here in the past; maybe I got the wrong impression from them, because I always lurk for some time before I post somewhere).

Please put those rules somewhere on the FAQ page then; I really hate being off-topic. At the moment I still don't see the point or rule in the FAQ that I've violated with this topic.

I have to add that the reply from Chris Rathman is quite unfounded and results from drawing conclusions which simply cannot be drawn from my last posting without heavily over-interpreting it. I (a) nowhere promoted the idea of having only a single programming language, (b) never said that theoreticians should be juxtaposed against practitioners (in fact I wrote the opposite), and (c) don't want to create an 'Übersprache' while ignoring the potential problems of combining features. I'm really a bit surprised that you consider a reply of this quality adequate to your high standards.

### Stream-of-Consciousness

I have to add that the reply from Chris Rathman is quite unfounded and results from drawing conclusions which simply cannot be drawn from my last posting without heavily over-interpreting it.

I don't think Anton was agreeing or disagreeing or even commenting on the first part of my post.

### Stream-of-Consciousness II

Please put those rules somewhere on the FAQ page then; I really hate being off-topic. At the moment I still don't see the point or rule in the FAQ that I've violated with this topic.

First rule of thumb: when Anton puts on his Admin Hat, the courteous thing is to acknowledge the suggestion, not to go and post a long stream of consciousness on this board that seems to be exactly the opposite of what was requested.

I'm really a bit surprised that you consider a reply of this quality adequate to your high standards.

Granted, my responses in this particular forum topic have not been well thought out - in part because when I respond to completely open-ended questions of PL philosophy, I have my own grudges that I carry around. When a topic is prone to meander, I start discussing my own half-baked ideas.

### Rules of civilized conversation

It might be nice if all aspects of civilized society could be codified as rules, but this isn't the case. While we have repeatedly stated that we are working on a more complete set of policies, it should still be clear that if an administrator (Anton) and one of the founding fathers of the site (Chris) comment that your posts are not appropriate, the civil thing to do is to think again, try to clarify, or stop posting, not to complain about rules. I direct your attention to this thread, which is linked from the FAQ and which, after explaining the communal aspects of LtU, includes the following:

Finally let me end by reminding everyone that posting here isn't a right, it's a privilege. The LtU community, even though ad hoc and without institutions, is strong enough a community to maintain the atmosphere of polite and sincere discussion we are used to having. However, in cases where it proves necessary, we will remove topics and posters that undermine this goal.

I will not go over everything you posted to justify why I feel you are not sincerely trying to engage in civilized and professional discussion. I think others pointed out some of the main problems already, but you didn't pause to reflect after reading their polite requests.

I hope that this blunt message will do the trick, since it seems you have a genuine interest in programming languages. If this is the case, please review the responses you received already, and take them to heart. In the hope that this will happen, I am not removing your posting privileges at this point.

### I tried to clarify

I tried to clarify (the 'civil thing' you mentioned), but that is obviously not wanted, or is even rudely disregarded as 'stream of consciousness' here.

I suggest removing (or renaming) the 'discussion' item in the menu on the right of the site, because I really have no clue how to discuss something if it's not allowed to make clarifications when a reply shows that there was some kind of misunderstanding or poses new questions.

In the future I will refrain from posting here, because it's simply useless under an imperative which only allows for 'sniper comments', and I will use this blog solely as a news site with links to interesting papers.

This is my last post here, so please remove my account or even this whole thread; I see no reason to let myself be insulted without the right to answer.

### I apologize for not well thought out criticism...

...as I'm really mixing two things here: (1) my opinions on what you are saying; and (2) my opinions on whether this is appropriate for LtU. I really should've left off the first part before delving into the second part.

For the first part, perhaps when I see sentences like "But do I really like Java? No.", I made an incorrect inference?

For the second part, Anton in his admin hat asked you to post this stuff in another venue, and use that as a basis for further LtU discussion. As much as getting mad at me might be justified, I still don't see why the original admin request was ignored?

Please put those rules somewhere on the FAQ page then; I really hate being off-topic. At the moment I still don't see the point or rule in the FAQ that I've violated with this topic.

For the benefit of other members who may not understand the current policies and the action in this case, I'm going to respond to this.

One of the relevant points on the FAQ page is that "Unfounded generalizations about programming languages are usually frowned on". The entire thesis being developed by karsten_w here has depended primarily on a number of such generalizations, many of which appear to be derived from a fairly limited perspective of the field.

Ten days ago, and a week before the current topic was posted, Ehud posted the blog not forum topic, which included the following:

One policy which clearly seems needed is that we should try to avoid ungrounded discussions: discussions in which someone defends an idea that they haven't clearly described, and for which there are no existing references. We should not be playing "twenty questions" with people who haven't taken the trouble to express themselves clearly — it's unproductive, and tends to reduce the quality of discussion. LtU is best used to discuss ideas that were published and argued elsewhere. It is not usually a good place for design discussions and the like.

This thread has been a poster-child for the kind of discussion described by this quote. The general lack of links or references to relevant work is one symptom of that.

However, none of this on its own was sufficient to trigger an admin post, partly because we're still working on developing policies, and we don't want to unfairly single out individuals without a strong reason. In this case, what prompted an admin post was the volume of comments from karsten_w in such a short time period: a volume that put him in a category otherwise occupied almost exclusively by a very few previous problem posters.

In this case, we felt that the volume of comments was symptomatic of a quality issue. Unfortunately, attempting to be gentle about this didn't help. A request to post a more detailed and considered writeup elsewhere was ignored, and what was subsequently posted here was basically more of the same. This is in contrast to the previous two posters who recently received similar admin messages, and took them quite well.

That's how we reached the current point. We certainly share some responsibility here: we don't have explicit enough policies, and lacking explicit policies, we've had too little enforcement. This has resulted in a situation in which some people understandably think that LtU is supposed to be an "anything goes" discussion forum for PL-related matters. I apologize to karsten_w if he received that impression. We've already taken some steps to correct that, and we'll be saying more in the very near future.

As always, community feedback on this is welcome. For the moment, any feedback not specific to the current thread should be posted in the "blog not forum" thread linked above.

### Because it's easier to change libraries than languages

Anything that might change is better off in a library than in the language. Libraries are easy to change or replace, whereas languages can usually only be extended.

For example, the original Java GUI library was AWT. Now it's Swing. Could the change have occurred if it had been a language feature rather than a library?

I think even standard libraries are dangerous. The C++ standard library contains some parts which are ill thought out (not just my opinion; Josuttis says so in his book), but we're probably stuck with them now. The numeric classes in the Haskell Prelude make it harder to write maths code than it ought to be, but because the Prelude is automatically loaded, they're hard to work around.