Lambda the Ultimate

inactiveTopic Min-Maxing languages
started 3/16/2004; 10:11:25 PM - last post 3/21/2004; 9:36:11 AM
M.J. Stahl - Min-Maxing languages  blueArrow
3/16/2004; 10:11:25 PM (reads: 264, responses: 5)
With any software product, especially one the developer has an emotional attachment to, 'feature creep' is a problem. And given that any language can and should be considered a software product, this problem can and usually does persist.

I have made mention before that I am working on developing a new language to fill in gaps that have persisted in other languages that I used. After over 12 hours of working on the abstract machine today, I decided to sit back and think about all the fun things I wanted to put into this language. This fun didn't last long as the notes I made concerning features increased in length as each moment passed. So I took a break and decided to think about more important things, feature focus and simplicity.

These thoughts (on feature focus and simplicity) were inspired by a recent posting here, the Luke Hohmann interview.

I wanted to extend this personal discussion to everyone else here on Lambda, by asking two questions.

When birthing a new language, what are the minimum features (functions, variable assignment, etc.) that you decide upon in order to claim the language is complete? And how do you decide upon these features over the others you may have on your list?

Martijn Vermaat - Re: Min-Maxing languages  blueArrow
3/17/2004; 2:16:52 AM (reads: 249, responses: 0)
I think it was mentioned here not too long ago that it was Niklaus Wirth who tried to design languages by leaving things out instead of adding new features.

I don't know how he thinks about this now, but there are certainly two camps here. Some find it elegant to have a language as simple as possible (well, maybe a little beyond the Turing complete requirement ;)). Think Lisp, maybe Java in a certain way. Most purists and academics seem to be in this camp. Of course, I might be wrong.

On the other hand, we have languages like Perl and Scala, of which the very existence is justified by the wish to offer a programmer more than he strictly needs. I guess these languages tend to be more popular, but there might be plenty of counter examples.

Which side you choose (if not sitting in between) depends on personal taste, application domain, and userbase. Some choose to have a very simple but 'complete' core system and build additional language features in the language itself (this is done with Lisp, and to some extent in Scala). As for the question 'what is complete in this context?', I think some decisions are important (maybe even apart from implementation details): how does your type system look, what are first-class citizens, how can I express state, strict or lazy evaluation (or both), what primitives are there for flow control, etcetera.
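The point about growing features from a small core can be made concrete. The sketch below is my own (in TypeScript rather than Lisp or Scala) and builds lazy evaluation out of nothing but first-class functions, the way Scheme grows delay/force from lambda:

```typescript
// Lazy evaluation from plain functions: a thunk is just a nullary
// function, and delay wraps an expression in a memoizing thunk.
type Thunk<T> = () => T;

function delay<T>(compute: () => T): Thunk<T> {
  let cached: { value: T } | null = null; // memo cell, hidden in the closure
  return () => {
    if (cached === null) {
      cached = { value: compute() }; // computed at most once
    }
    return cached.value;
  };
}

// force: run the thunk.
function force<T>(t: Thunk<T>): T {
  return t();
}

let evaluations = 0;
const expensive = delay(() => {
  evaluations++;
  return 6 * 7;
});

console.log(force(expensive)); // 42
console.log(force(expensive)); // 42 again, but computed only once
console.log(evaluations);      // 1
```

Nothing here required language support beyond closures, which is exactly the "simple core plus library features" strategy.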

Things like syntax, library functions, and the language logo and colors are important, but could be worked out after you find your core to be complete. Which is not to say none of these will be very important properties of your language, so write down everything you think of.

Frank Atanassow - Re: Min-Maxing languages  blueArrow
3/17/2004; 4:12:42 AM (reads: 248, responses: 1)
MJ: When birthing a new language, what are the minimum features (which could be functions, and variable assignment, etc) that you decide upon in order to claim the language is complete?

First I would ask myself why I want to create a new language, by listing the deficiencies of existing languages and how they contribute to problems writing programs. This presumes a technique for comparing languages. If you want to convince other informed, intelligent people with such a comparison, then you need to pick an objective technique. (If you only want to convince the masses, then you can get by with rhetoric.)

The best objective technique I know for comparing languages is to translate both of them into some universal domain, like the language of mathematics. Such a translation is called a semantic model. This, IMO, answers your question in the other thread about why people take a `research' perspective on programming languages: it's because we want to compare them in an objective manner.

Once you have a semantic model, if you know a few things about algebra and logic, you can classify them according to various notions of expressive power and talk precisely about what one feature or another contributes or constrains. Examples are things like the definability of higher-order functions, and what class of static guarantees can be expressed by typing.

If I were to design a language, then, I would certainly want to make sure that it meets or exceeds the standards set by existing programming languages. Otherwise, what's the point? I might as well use that existing language. So for example I would make sure that higher-order functions are definable, since there are already many languages that support them. Similarly for first-class continuations, and so on.

IMO, it's very difficult to come up with genuinely new features, that is, ones which are not already definable in one or more of the best languages in each of their respective language paradigms (functional, logic, ...), without a good understanding of techniques for comparing languages. The `best' languages are probably Scheme, Haskell, Oz and a couple others which are roughly on the same level; in these languages you can already define many or all of the features which appear to set apart other languages. For example, there was a paper by Friedman posted here recently which showed how to do OO in Scheme.

Almost without fail, the original contributions of new languages which appear from outside the research sector are not really language contributions, but language implementation contributions. For example, OO may be definable in Scheme, but inefficient. (I don't know if that's the case, but for the sake of argument.) Then you might make an argument that Smalltalk or something is superior, even though I think the fragment of Scheme minus continuations can be assigned essentially the same semantics as Smalltalk. Similar arguments hold for superficial changes in syntax, and choices of primitives.
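The "OO is definable in a lambda language" claim can be sketched without any built-in object system at all. The TypeScript snippet below is my own illustration (not the encoding from the Friedman paper) of the classic objects-as-closures trick: an object is just a function that dispatches on a message name:

```typescript
// Message-passing objects from closures alone: state lives in the
// enclosing scope, methods are cases in a dispatch function.
type Message = string;
type Obj = (msg: Message, ...args: number[]) => number;

function makeCounter(start: number): Obj {
  let count = start; // instance variable, hidden in the closure
  return (msg, ...args) => {
    switch (msg) {
      case "get": return count;
      case "add": return (count += args[0]);
      default: throw new Error(`message not understood: ${msg}`);
    }
  };
}

const c = makeCounter(10);
c("add", 5);
console.log(c("get")); // 15
```

Inheritance and self-reference take more plumbing, but the core point stands: closures already give you encapsulated state plus dispatch, so OO is a definable feature rather than a primitive one.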

Martijn: well, maybe a little beyond the Turing complete requirement

A programming language is by definition Turing-complete. Meaningful comparisons of languages are more fine-grained; you need to talk about static and dynamic expressiveness.

Think Lisp, maybe Java in a certain way.

Lisp and Java are just about the least orthogonal languages I can think of. The only thing simple about Lisp is its syntax; I'm astonished that anyone would call it `simple'.

Martijn Vermaat - Re: Min-Maxing languages  blueArrow
3/17/2004; 7:15:08 AM (reads: 239, responses: 0)
Frank: Lisp and Java are just about the least orthogonal languages I can think of. The only thing simple about Lisp is its syntax;

Ok, I was indeed thinking of syntax here, so that was a bad example (especially given my later comment on syntax). (On edit: with Lisp, I was actually referring to Scheme; I guess I should've said so too.)

With Java I did not mean its simplicity in an absolute sense, but some choices for simplicity in its design process compared to existing languages in the same domain, notably C++ (lack of operator overloading and multiple inheritance).

If I were to design a language, then, I would certainly want to make sure that it meets or exceeds the standards set by existing programming languages.

Are you sure you will always want to have all available power? I could imagine languages for some domains that don't suffer from lacking a particular feature (as useful as it may be in other domains). Also, sometimes with two different approaches it is hard to classify one as better or worse than the other. (So, how would we define 'exceeds'?) Type systems come to mind here.

So I still think the application domain is very important.

Scott Turner - Re: Min-Maxing languages  blueArrow
3/21/2004; 9:03:02 AM (reads: 142, responses: 0)
Sorry if this is a duplicate ... my first post appears to have failed.

Your process of working on a language to fix the gaps in existing languages strikes a note familiar to me. Similar thoughts occur to me when dealing with languages' shortcomings, and like you, I once started work on a language of my own and reached that early stage of listing all of the things to be "done right". In my case the scope was definitely too large.

The Luke Hohmann interview gives a good perspective on architecture. In terms of the "baby" analogy, the completeness that you look for depends on what your language should grow up to be. To have potential for growth it needs a sound and descriptive type system, and a syntax which is free of ambiguity. If achieving widespread use is a goal, then efficiency/compilability is a major consideration.

Several of the required features correspond to first-order logic. These include records/structs (and), discriminated unions (or), functions (implication), and continuations (not). If types are statically checked, then you need generics (for all quantifier), and abstraction (exists quantifier). Handling effects such as I/O and variables is also necessary, as is integration with libraries written in another language or languages.
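That logic-to-feature table can be written down directly in a language with a modern type system. The TypeScript sketch below is mine (names and encodings are my own choices; the correspondence itself is the standard Curry-Howard reading), with one feature per logical connective:

```typescript
// records/structs = conjunction (A and B)
type Point = { x: number; y: number };

// discriminated unions = disjunction (A or B)
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

// functions = implication (A implies B)
const area = (s: Shape): number =>
  s.kind === "circle" ? Math.PI * s.radius ** 2 : s.side ** 2;

// continuations = negation (not A, i.e. A implies falsum)
type Not<A> = (a: A) => never;

// generics = universal quantification (for all T)
function identity<T>(t: T): T { return t; }

// abstraction = existential quantification (exists T), here in its
// standard encoding: a package is used only through a generic consumer.
type Exists = <R>(consumer: <T>(value: T, show: (t: T) => string) => R) => R;
const packed: Exists = (consumer) => consumer(42, (n) => n.toString());

console.log(area({ kind: "circle", radius: 1 })); // ~3.14159
console.log(packed((v, show) => show(v)));        // "42"
```

The `Exists` encoding is the interesting one: the hidden representation type never escapes, which is exactly the abstraction guarantee Scott mentions.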

IMO recursion and modules are important but could be left out from the "giving birth".

In the long run, a language is defined as much by its basic types and libraries as by the core syntax. You need integers and strings to start with. Will the language be biased toward the speed of native integers, the robustness of unbounded integers, or unbiased? Strings involve even more tradeoffs. If integers and strings are kept simple, it will leave more options open.
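The integer trade-off fits in a few lines. In this sketch (mine, using JavaScript/TypeScript semantics), native floating-point "integers" silently lose precision past 2^53, while unbounded integers (BigInt) stay exact at a speed cost:

```typescript
// Native numbers are 64-bit floats: exact only up to 2^53.
const nativeBig = 2 ** 53 + 1;        // the +1 is silently rounded away
console.log(nativeBig === 2 ** 53);   // true -- precision lost

// BigInt is arbitrary-precision: slower, but robust.
const exact = 2n ** 53n + 1n;
console.log(exact === 2n ** 53n);     // false -- the +1 is preserved
```

A language that commits early to one representation (or, like JavaScript eventually did, ships both) is making exactly the bias decision Scott describes.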

As for how you select features from your list, as the interview says, it's by figuring out the "market" and talking with "customers". I put those in quotes because in my case any "income" that occurs will definitely be in quotes, too. If you are business-oriented, just remove the quotes.

M.J. Stahl - Re: Min-Maxing languages  blueArrow
3/21/2004; 9:36:11 AM (reads: 133, responses: 0)
Many of you are talking about type systems, and to be honest I have always been a dynamic typing guy, but that does not mean I am not open to suggestions. I will explain what my current thoughts are and maybe you guys can help me out a little bit.

Color is a message-passing, object-oriented language (where everything is an object) that is based on prototypes and not classes. In fact much of its syntax and semantics are based on JavaScript.

I could define a new object, let's say the number 3.

var x = new Number(3); // or shortly var x = 3;

In Color I wanted to take a lot of ideas learned from Self and incorporate them, for example message passing. Assuming that x (a Number object) has a mult method/slot on it, we could say:

x.mult(4) // or more sweetly x * 4

Here are some of the assumptions I made about typing in Color. If a message containing the list method 'length' were passed to x, a search would begin up the prototype hierarchy, eventually returning some sort of error message stating that the slot 'length' was not found. Here the prototype/type of x is a Number, and therefore its behavior is restricted. Would this classify as one of the functions of a type system?
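Since Color is modeled on JavaScript, that lookup can be sketched with real JavaScript prototype machinery. The names below are my own stand-ins, and note one difference: JavaScript yields undefined for a missing slot where Color would signal an error:

```typescript
// A prototype holding the behavior shared by Number objects.
const NumberProto = {
  mult(this: { value: number }, n: number) { return this.value * n; },
};

// x delegates to NumberProto via its prototype link.
const x = Object.create(NumberProto);
x.value = 3;

console.log(x.mult(4)); // 12 -- 'mult' found one link up the chain

// 'length' is nowhere on x's prototype chain, so the "message"
// is not understood (JS gives undefined rather than an error):
console.log("length" in x);   // false
console.log(typeof x.length); // "undefined"
```

Restricting which messages an object can answer based on its prototype is indeed one of the jobs a (structural) type system does, just statically rather than at lookup time.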

Now we can explore something a little bit more dangerous but fun: changing an object's prototype at runtime. I found this functionality in Self and thought it would be neat to play around with. I am still finding it hard to come up with ways that I would use it, but I have thought of one.

// define circ as a Circle Object with radius of 10

var circ = Circle;

circ.radius = 10;

Now on Circle there exist three slots defining circ's behavior: diameter, area, and circumference. But I find that I wish to compute the area of a sphere with a radius of 10, for whatever reason (and this may be a bad example). So instead of creating a new variable, why not just throw around circ after giving it some new behavior (because the area of a circle is calculated differently than the area of a sphere, obviously)? So we change circ's prototype. There are some semantic issues I haven't resolved here, as you will see, but please ignore those, for I haven't completely thought things through.

circ.prototype = Sphere(); // radius slot on circ doesn't change

sph_area = circ.area();

// then we change it back;
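For what it's worth, this prototype swap runs today in JavaScript via Object.setPrototypeOf. The sketch below uses my own stand-in Circle and Sphere prototypes (with sphere surface area for 'area') to mirror the snippet above:

```typescript
// Two prototypes with a differently-behaving 'area' slot.
const Circle = {
  area(this: { radius: number }) { return Math.PI * this.radius ** 2; },
};
const Sphere = {
  area(this: { radius: number }) { return 4 * Math.PI * this.radius ** 2; },
};

const circ = Object.create(Circle);
circ.radius = 10;
console.log(circ.area()); // ~314.159 (circle area)

Object.setPrototypeOf(circ, Sphere); // radius slot on circ is untouched
console.log(circ.area()); // ~1256.637 (sphere surface area)

Object.setPrototypeOf(circ, Circle); // then we change it back
```

The data slot stays put while the behavior slots come from whichever prototype is current, which is exactly the Self-style semantics being described.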

I imagine that this causes some major theoretical problems with typing (if I ever do typing at all), so I needed a counter-measure: a way to freeze the prototype of an object so that it couldn't be changed.

Circle circ = new Circle;

circ.radius = 3;

Defining circ before with var meant circ was a variant type (much like that in many popular BASICs). Now circ is typed only to hold a Circle object, and in turn its prototype cannot be changed.
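This "frozen prototype" counter-measure also has a direct JavaScript analogue: making an object non-extensible causes a later prototype swap to throw instead of silently succeeding. A sketch, with my own stand-in prototypes:

```typescript
const Circle = { area(this: { radius: number }) { return Math.PI * this.radius ** 2; } };
const Sphere = { area(this: { radius: number }) { return 4 * Math.PI * this.radius ** 2; } };

const circ = Object.create(Circle);
circ.radius = 3; // set the data slot before locking the object down

// The "Circle circ = new Circle" analogue: lock circ's prototype
// (preventExtensions also forbids adding new slots, so Color would
// want a narrower "freeze only the prototype link" operation).
Object.preventExtensions(circ);

try {
  Object.setPrototypeOf(circ, Sphere); // now illegal: throws TypeError
} catch (e) {
  console.log("prototype is frozen");
}
console.log(circ.area()); // still the circle area, ~28.274
```

So the typed declaration can be read as sugar for "create, initialize, then seal the prototype link", which keeps the dynamic core intact while giving the static guarantee you want.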

I have no idea if I am on the right path or not because, as I said, I have always found type systems to be constrictive, but I have seen some of their good points by listening to others on here, so nonetheless I am open to typing suggestions.

Can anyone point me to some good literature?