Research vs implementation, theory vs practice, and LtU

There's a conflict between doing research and creating usable programming languages, and different people have different interests.
However, some LtU regulars seem to believe (some variation of the claim that) PL research is somehow pointless. (I won't try to offer an exact summary; it's too hard, so this might well be a strawman.)

Instead of discussing this inside another topic, it seems this question deserves its own thread, because it is interesting. I'm not sure there's anything new to say on the topic; this is an old discussion. And I guess that some of us will still not be personally interested in research. But I'd rather not hear the wrong arguments against research. I don't want to hear that research will never be useful in practice. I'm fine with hearing, say, that if you want something now, theoretical research is often not what you should look into; theoretical researchers figure out what you should use in a few years or decades.

I'll reserve my actual answers for the thread.

Types are Actors that can be used in computations

Of course, types are Actors that can be used in computations just like other Actors.

But parametrized types require special treatment and rules.

Types are Actors In ActorScript

I think you mean to say that types are actors in ActorScript. There are many type systems where types are simple types, and values are values, and there are no actors at all.

Dependent types not needed

Some of the above are possible even without dependent types. For example, Rossberg's 1ML unifies functions and functors (parameterized modules) and allows you to treat list as a value of type type -> type, while desugaring into plain System Fω (making heavy use of existential quantifiers and higher-kinded types).

As for types like array[float, 3], I'd prefer something like refinement types with an automatic theorem prover instead of full-blown dependent types.
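
To make the contrast concrete, here is a rough Haskell sketch (my own illustration, not taken from 1ML or any of the systems discussed above): ordinary type-level indices already give you an array[float, 3]-style type whose length is checked at compile time, without full dependent types and without calling out to a prover.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}
    import GHC.TypeLits

    -- A vector whose length is part of its type, so "array[float, 3]"
    -- becomes Vec 3 Double and the size is tracked statically.
    data Vec (n :: Nat) a where
      VNil  :: Vec 0 a
      VCons :: a -> Vec n a -> Vec (n + 1) a

    -- Known to hold exactly three elements; adding or dropping one
    -- is a type error rather than a runtime check.
    point3 :: Vec 3 Double
    point3 = VCons 1.0 (VCons 2.0 (VCons 3.0 VNil))

    -- Head is only defined for vectors of length at least one.
    vhead :: Vec (n + 1) a -> a
    vhead (VCons x _) = x

A refinement-type system with an SMT solver, which is what I'd actually prefer, would instead let the length be an ordinary integer constrained by a predicate, so you could also state things like "this index is less than the length" without threading the size through every type.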

Lack of positive feedback

Much has been written about the downsides of gamification, but at least it does add some positive feedback to a discussion. Many of those empty conversations could be:
* Yes, that is very interesting but personally I have no comment to make.
* That's a load of rubbish, I have no interest in that at all.
* I like what is being said, but since I agree with the viewpoint being expressed there is no way to join in without a somewhat empty "me too!" +1.
At the moment there is no real way to tell these states apart.

I agree completely with your observations on the literature at large, and the underlying causes for its current state. I also really like reading the contributions that gasche makes, although frequently I fall into one of two of the states above, and thus don't really add anything.

What kind of mechanism (social or technical) would add some positive feedback to the discussions on LtU without falling into the trap of reinforcing groupthink / gamifying the site?

The context of a discussion

The context of a discussion can make all the difference in how developable it is. The first mention of my fexpr research on LtU (that I know of), a straightforward sharing of the Kernel web page I was maintaining, didn't generate much discussion (though it did result in my becoming aware of, and ultimately joining, LtU :-). From time to time fexprs/Kernel were relevant to discussions and I'd make small comments when they seemed pertinent; but the deep, interesting discussion of the issues involved that I'd have liked to have didn't occur until about three and a half years later, with a topic on first-class environments. So, patience and looking at things from different angles can eventually pay off.

[Note, I'm looking just at what it took for that particular theme to get discussed in depth.]

Silent is fine

To clarify, I think that having no comment to make is perfectly fine (whether one is interested in the work or not), and I am not particularly troubled by the fact that most discussions are very quiet for this likely reason. Having stimulating discussions of the presented work is always great, but we should not expect it to be the default state.

(We could also afford a lot more of "I don't know about X and thus don't understand point Y, can anyone provide extra details?" comments than we have today.)

Absolutely.

… and while I personally could benefit in a few cases from the "don't understand" point, I would consider it a shame if LtU became a PL version of Stack Exchange.

LtU is intimidating at first, and although that may dissuade the casual browser, it's what makes the place unique. For non-academic members (and possibly for those in academia) it's necessary to read the papers (in my case, sometimes multiple times) to even begin to get a grasp of what's going on, and that's far too much like hard work for the majority of “internautes”. I often find myself thinking "I'm way out of my league here", but try and follow along anyway - the voyage can be as interesting as the final destination.

metaprogramming

I think I left it off my laundry list, and it's important to me.

I read an argument the other day that made me somewhat rethink my objection to simplified languages (types being one example).

It was related to "The Principle of Least Power" - a suggestion that it's good programming practice to code in the least powerful language that expresses the problem.

The part I found myself nodding to was the idea that if the description of the program is simple enough, then you can process it automatically, change it, and implement it in different ways. There was a suggestion that powerful and expressive languages are brittle under code transformation.

It's an argument, I think, not for limited languages so much as for domain specific languages, and for metaprogramming facilities.

The most common example of this might be XML used for user interfaces.

But in most systems, types are there for static analysis, so they're not first-class objects and they're not a DSL. So I'm still not really talking about types.
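
To make the "process it automatically" point concrete, here is a small hypothetical Haskell sketch (the names are my own invention): a deliberately weak, declarative description of a user interface kept as plain data, in the spirit of XML-for-UIs, so the same description can be rendered one way, transformed, or analysed by other tools.

    import Data.Char (toUpper)

    -- A declarative UI description: just data, with no general computation.
    data Ui
      = Label String
      | Button String
      | Column [Ui]

    -- One interpretation: render the description as indented text
    -- (a stand-in for generating HTML or native widgets).
    render :: Int -> Ui -> String
    render indent ui = case ui of
        Label s   -> pad ++ "label: "  ++ s ++ "\n"
        Button s  -> pad ++ "button: " ++ s ++ "\n"
        Column cs -> pad ++ "column:\n" ++ concatMap (render (indent + 2)) cs
      where
        pad = replicate indent ' '

    -- Another interpretation: a whole-program transformation. Trivial here,
    -- but only this easy because the description is data rather than code.
    upcaseLabels :: Ui -> Ui
    upcaseLabels (Label s)   = Label (map toUpper s)
    upcaseLabels (Button s)  = Button s
    upcaseLabels (Column cs) = Column (map upcaseLabels cs)

The moment the description is allowed to embed arbitrary code, both of these tools become much harder to write, which is the "least power" trade-off in a nutshell.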

Research and PL

I think the question is well intended, but it's the kind of question that produces a lot of opinionated but uninformed discussion. Even, to some degree, here on LtU.

The first point I want to make, which has been made here already, is that many successful new programming languages, perhaps most, originate from research: C, C++, Java, BASIC, Pascal (and yes, that was successful), Objective-C, Smalltalk (and yes, in some domains that was very successful too), Prolog (also successful in some domains), C#, F#, Python, Scala, Lua, Erlang, to name just a few. BitC too, if that ultimately turns out to have uptake. It's hard to make broad judgments without a comprehensive list of languages in front of me.

Subjectively, it has seemed to me that languages designed outside of research have tended to be more ad hoc, and have often suffered from their ad hoc nature. Swift, Rust, Perl, and D come to mind. In some cases the industry collectively has spent very large sums dealing with the consequences of using these languages. This is not universally true; SQL stands out as an industrially originated language, though it built heavily on research antecedents.

I think the focus on the lambda calculus in the original question is a bit off. Coherent languages need some clearly defined semantics. The lambda calculus has been the most mature and best understood, hence the emphasis, but it isn't the only option. The pi calculus is a fine alternative, and may be better suited to OO languages. The lambda calculus per se is an academic construct, but the value of knowing what a program means is not limited to purely academic pursuits. Rigor has value in industry too. BitC cannot be captured by the traditional typed lambda calculus (because of the self type for objects), but it clearly needs a rigorous and formally captured semantics.

Again subjectively, it has seemed to me that over the last decade there has been a striking trend in favor of new languages being more rigorous (Swift being a notable exception). That seems all to the good.

Finally, I think the line between researcher and industrial practitioner is a false dichotomy more often than people realize. Which side of that line do I fall on? Which side does Guy Steele fall on? How about Don Syme?

It would be hard to call C#

It would be hard to call C# an academic language, or, for that matter, C++ or F#. They came from companies, often involved some non-PhDs, and, in any case, were driven by solving problems rather than by investigating the possible. Are Bell Labs, Microsoft Research, and Xerox PARC like universities with academic research projects? Not really. Is Anders an academic trying to publish papers, or an experienced language designer? The answer to that is easy. Was Don Syme doing research when he designed F#, or was he designing a language to fill a need?

Also, foundationally speaking, you don't need clearly defined semantics at the outset, and plenty of languages are used successfully without them. That is not to say they aren't useful, but necessary is a much higher bar! Of course, when we start talking about Rust and BitC, which claim some extra degree of safety, it is a different ballgame. But there also has to be value in those semantics; they have to pay off, or it is just cargo-culting tedium. E.g., what would Swift gain from more formal work? Beyond full employment for PL researchers, what is in it for programmers? Why not invest in better tooling instead?

The rigor fad seems to have started when GLS was put on to do the JLS along with Gilad Bracha. But as far as I can tell, only C# (out of the production languages) has gotten similar attention, thanks to work from people in MSR (along with Mads). No one else really has the resources for it, and even Scala lacks much in the way of a formal specification. So Swift is not really an exception at all; it is following the same process that new languages which start out and get big have always followed.

ad hoc-ness and shoulders of giants

I'll question whether "usable" is equivalent to "has lots of users", and I'll question whether "research" is a value in itself.

Let's go for "lots of users": With the right budget, it might be quite easy to get lots of users: use only tried and tested ideas, keep everything as simple as possible, write documentation, maintain backwards compatibility, practice some form of marketing, hire people to write GUI libraries, run on lots of platforms, provide acceptable performance, fill a gap...

You can see how this recipe over time leads to languages that become less and less usable: as usage increases, new gaps are identified and revisions become necessary... I think it might be rational for a company to evaluate the results of research activity in economic terms; that will lead to decisions that may or may not please researchers, and likewise such an evaluation may be too shortsighted, so that important developments are missed or picked up too late by "the" industry.

The whole discussion here is bound to be rendered a bit more obsolete (or imprecise) by the next appearance of some research output in an "applied" language, or the next research paper that solves a problem that was only found in "industry" language X.

There are programmers (and programming situations) for whom a spec will be preferable to all sorts of tooling, and then there are tools that will make inherent language shortcomings bearable. There is also a lot to be said about free and open source software and how it builds a body of research that one probably would not call academic, but which contributes to the body of human knowledge in a very similar way.

GLS, JLS, MSR

Too many acronyms. Needs lowercase.

Guy L. Steele, Java language

Guy L. Steele, Java Language Specification, Microsoft Research.

Great. Thanks.

Great. Thanks.

Typing on tablets leads to

Typing on tablets leads to shortcuts.

Yeah. Makes your posts hard to read

Yeah, I've noticed you doing it more often. It makes your posts hard to read sometimes, though.

It's summer and China lacks

It's summer and China lacks daylight saving time, so I wake up at 4am with nothing to do but post to LtU from bed using an iPad.

Homoiconic languages

I have an interest in this.

Mathematica is cool, but I want one that isn't limited to symbolic processing of mathematical expressions; I want other DSLs and metaprogramming.

I'm familiar with Prolog (which I think is homoiconic in a better way than Lisp and Scheme), but there are a bunch of programming languages I haven't gotten to yet: REBOL and its open-source variants, Io, and maybe Pure?