General

What needs to be done?

So suppose, like many here (?), you are a believer in the promise of functional programming. And suppose more and more FP features and FP languages are becoming mainstream. Does this mean nothing remains to be done? Certainly not! Among the recurring topics that come up for discussion here are support for concurrency, utilizing GPUs, FRP, dependent typing, fine-grained control of effects, native support for web programming and the web stack of technologies and protocols, and more.

So what would be your priority list? What would be the first article/web site you'd want LtU readers to pay attention to, in order to see the importance of what you deem most important for the future of PLs? Who would be your ideal guest for the proverbial dinner? And what would you ask him (or her!)?

Added clarification: As can be understood from the discussion below, this message has nothing to do with FP in particular. That's just the teaser. The questions in the second paragraph are entirely general.

the gnu extension language

I found this to be an entertaining and interesting read: the gnu extension language, by Andy Wingo, maintainer of Guile.

Guile is the GNU extension language. This is the case because Richard Stallman said so, 17 years ago. Beyond being a smart guy, Richard is powerfully eloquent: his "let there be Guile" proclamation was sufficient to activate the existing efforts to give GNU a good extension language. These disparate efforts became a piece of software, a community of hackers and users, and an idea in peoples' heads, on their lips, and at their fingertips.

The two features of Guile he highlights are macros ("With most languages, either you have pattern matching, because Joe Armstrong put it there, or you don't") and delimited continuations.
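
For readers who haven't met delimited continuations: reset delimits a region of the program, and shift captures "the rest of that region" as an ordinary function. Here is a tiny illustration written in Haskell using Control.Monad.Trans.Cont from the transformers package, rather than in Guile's Scheme; the example is mine, not Wingo's.

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

    -- resetT delimits the continuation; shiftT captures "the rest of the
    -- computation up to the nearest resetT" as an ordinary function k.
    -- Here k is essentially (\v -> return (2 + v)), and we call it twice:
    -- 2 + (2 + 3) = 7.
    example :: IO Int
    example = evalContT $ resetT $ do
      x <- shiftT $ \k -> lift (k 3 >>= k)
      return (2 + x)

    main :: IO ()
    main = example >>= print   -- prints 7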

The accompanying slides, The User in the Loop, for the 2011 GNU Hackers Meeting are also noteworthy, because they are not as dry as the usual PL fare - instead Wingo revives the spirit of the Portland Pattern Repository:

  • "Thesis: Some places just feel right"
  • "Architectural patterns help produce that feeling"
  • "E11y [extensibility] is fundamental to human agency and happiness"
  • "Moglen: ‘Software is the steel of the 21st century’"
  • "Building Materials: Le Corbusier's concrete; GNU's C"

[Update: Video now available.]

Guidance to avoiding vulnerabilities in programming languages (ISO/IEC 24772)

I don't recall a discussion here on ISO/IEC TR 24772, "Guidance to avoiding vulnerabilities in programming languages through language selection and use." This report describes programming language vulnerabilities in a generic way, and is supported by language-specific annexes.

An introduction to this report can be found on page 46 of this issue of the Ada User Journal (how un-web-like!). A lengthier discussion can be found in this issue of the same publication.

Rob Pike: Public Static Void

Rob Pike's talk about the motivation for Go is rather fun, but doesn't really break new ground. Most of what he says has been said here many times, from the critique of the verbosity of C++ and Java to the skepticism about dynamic typing. Some small details are perhaps worth arguing with, but in general Pike is one of the good guys -- it's all motherhood and apple pie.

So why mention this at all (especially since it is not even breaking news)? Well, what caught my attention was the brief reconstruction of history that Pike presents. While he is perfectly honest about not being interested in history, and merely giving his personal impressions, the description is typical. What bugs me, particularly given the context of this talk, is that the history is totally sanitized. It's the "history of ideas" in the bad sense of the term -- nothing about interests (commercial and otherwise), incentives, marketing, social power, path dependence, anything. Since we had a few discussions recently about the historiography of the field, I thought I'd bring this up (the point is not to target Pike's talk in particular).

Now, when you think about Java, for example, it is very clear that the language didn't simply take over because of the reasons Pike marshals. Adoption is itself a process, and one that is worth thinking about. More to the point, I think, is that Java was (a) energetically marketed and (b) essentially a commercial venture, aimed at furthering the interests of a company (that is no longer with us...). Somehow I think all this is directly relevant to Go. But of course, it is hard to see Go gaining the success of Java.

All this is to say that history is not just "we had a language that did x well, but not y, so we came up with a new language that did y but handled z only marginally, so now we are building Go (which compiles real fast, you know) etc. etc."

Or put differently, those who do not know history are doomed to repeat it (or some variation of this cliche that is more authentic). Or does this not hold when it comes to PLs?

A Larger Decidable Semiunification Problem

In A Larger Decidable Semiunification Problem (2007), Brad Lushman and Gordon V. Cormack of the University of Waterloo

[...] present a graph-theoretic framework in which to study instances of the semiunification problem (SUP), which is known to be undecidable, but has several known and important decidable subsets. One such subset, the acyclic semiunification problem (ASUP), has proved useful in the study of polymorphic type inference. We present graph-theoretic criteria in our framework that exactly characterize the ASUP acyclicity constraint. We then use our framework to find a decidable subset of SUP (which we call R-ASUP), which has a more natural description than ASUP, and strictly contains it.
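
For those who haven't run into the problem before: a semiunification instance is a set of inequalities σ ≤ τ between first-order terms, and a solution is a substitution S such that for each inequality there is a further substitution R with R(S(σ)) = S(τ); in other words, after applying S, every left-hand side can still be instantiated to match its right-hand side. (That is the standard formulation, paraphrased, not the paper's exact wording.)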

Kona

Kona is a new open-source implementation of Arthur Whitney's K, an ASCII-based, APL-like language. Kona is a fully working version of K3.

If you haven't ever tried APL/J/K or their ilk you might find this language incomprehensible at first -- unless you like a challenge! Watch the screencasts or read some of our earlier APL/J stories.

Regardless of your interest in K, any LtUer worth his salt will enjoy the source code. We wrote a bit in the past about the history of the remarkable C coding style it uses, but I can't locate the link at the moment.

Interview With Albert Gräf - Author of the Pure Programming Language

Albert Gräf is the author of the Pure programming language. Pure is a functional programming language based on term rewriting. It has a syntax featuring curried function applications, lexical closures and equational definitions with pattern matching, and thus is somewhat similar to languages of the Haskell and ML variety. Pure is also a dynamic language, and is more like Lisp in this respect. The interpreter has an LLVM backend that does JIT compilation.

Part 1 and Part 2
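
For a taste of the equational, pattern-matching style, here is a definition written in Haskell rather than Pure (the surface style is similar, though Pure is dynamically typed and this is not Pure syntax):

    -- Equational definitions with pattern matching, in the style Pure shares
    -- with Haskell and ML (this is Haskell, not Pure syntax).
    fib :: Integer -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = print (map fib [0 .. 10])   -- [0,1,1,2,3,5,8,13,21,34,55]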

Keyword and Optional Arguments in PLT Scheme

Keyword and Optional Arguments in PLT Scheme, Matthew Flatt and Eli Barzilay, 2009 Workshop on Scheme and Functional Programming.

The lambda and procedure-application forms in PLT Scheme support arguments that are tagged with keywords, instead of identified by position, as well as optional arguments with default values. Unlike previous keyword-argument systems for Scheme, a keyword is not self-quoting as an expression, and keyword arguments use a different calling convention than non-keyword arguments. Consequently, a keyword serves more reliably (e.g., in terms of error reporting) as a lightweight syntactic delimiter on procedure arguments. Our design requires no changes to the PLT Scheme core compiler, because lambda and application forms that support keywords are implemented by macros over conventional core forms that lack keyword support.

As usual, a solid paper by the PLTers, this time on flexible argument passing. Making named arguments apparent at compile-time (by introducing keyword symbols that may not be used as ordinary values) seems right and enables some optimizations. There are also some nice Racket-specifics in there, such as the use of the customizable application form #%app, which - together with "applicable structure types" - allows the implementation of named arguments in "userland". The paper is rounded out by a performance evaluation and a description of similar facilities in other languages.

I think this is a very good design (and implementation technique) for named arguments. A facility [edit: syntax support] for receiving all named arguments of a function seems to be missing though - but it can probably be added in userland, too.
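
For comparison (and emphatically not the paper's macro-based technique), the usual workaround in a language without keyword arguments is a record of defaults that callers override by name; here is a minimal Haskell sketch with invented names:

    -- A record of defaults standing in for keyword/optional arguments
    -- (names are made up for illustration).
    data ConnectOpts = ConnectOpts
      { host    :: String
      , port    :: Int
      , timeout :: Int   -- seconds
      }

    defaults :: ConnectOpts
    defaults = ConnectOpts { host = "localhost", port = 80, timeout = 30 }

    connect :: ConnectOpts -> String
    connect o = host o ++ ":" ++ show (port o)
             ++ " (timeout " ++ show (timeout o) ++ "s)"

    main :: IO ()
    main = putStrLn (connect defaults { port = 8080 })
    -- record update plays the role of passing an argument by keyword:
    -- prints "localhost:8080 (timeout 30s)"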

Invertible Syntax Descriptions: Unifying Parsing and Pretty Printing

In Invertible Syntax Descriptions: Unifying Parsing and Pretty Printing, Tillmann Rendel and Klaus Ostermann at the University of Marburg, Germany, apply the "don't repeat yourself" principle to parsers and pretty printers.

Parsers and pretty-printers for a language are often quite similar, yet both are typically implemented separately, leading to redundancy and potential inconsistency. We propose a new interface of syntactic descriptions, with which both parser and pretty-printer can be described as a single program. Whether a syntactic description is used as a parser or as a pretty-printer is determined by the implementation of the interface. Syntactic descriptions enable programmers to describe the connection between concrete and abstract syntax once and for all, and use these descriptions for parsing or pretty-printing as needed. We also discuss the generalization of our programming technique towards an algebra of partial isomorphisms.
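
Here is a minimal sketch of the idea in Haskell. This is my own toy formulation, not the paper's actual interface (which abstracts over the description type with type classes), but it shows the single-description, two-interpretations point:

    -- One description, two interpretations: the same "Syntax" value is used
    -- below both to parse and to pretty-print.

    -- Partial isomorphisms connect concrete and abstract syntax.
    data Iso a b = Iso { apply :: a -> Maybe b, unapply :: b -> Maybe a }

    -- A syntax description: a parser and a printer built together,
    -- so they cannot drift apart.
    data Syntax a = Syntax
      { parse  :: String -> Maybe (a, String)
      , print_ :: a -> Maybe String
      }

    -- Map a partial isomorphism over a description.
    imap :: Iso a b -> Syntax a -> Syntax b
    imap iso s = Syntax
      { parse  = \inp -> do (a, rest) <- parse s inp
                            b <- apply iso a
                            return (b, rest)
      , print_ = \b -> unapply iso b >>= print_ s
      }

    -- Sequence two descriptions.
    (<.>) :: Syntax a -> Syntax b -> Syntax (a, b)
    sa <.> sb = Syntax
      { parse  = \inp -> do (a, rest)  <- parse sa inp
                            (b, rest') <- parse sb rest
                            return ((a, b), rest')
      , print_ = \(a, b) -> (++) <$> print_ sa a <*> print_ sb b
      }

    -- A single, fixed character.
    token :: Char -> Syntax Char
    token c = Syntax
      { parse  = \inp -> case inp of
                           (x:xs) | x == c -> Just (c, xs)
                           _               -> Nothing
      , print_ = \x -> if x == c then Just [c] else Nothing
      }

    -- Abstract syntax for the toy grammar "ab".
    data AB = AB Char Char deriving Show

    pairIso :: Iso (Char, Char) AB
    pairIso = Iso (\(a, b) -> Just (AB a b)) (\(AB a b) -> Just (a, b))

    -- The grammar, written once.
    ab :: Syntax AB
    ab = imap pairIso (token 'a' <.> token 'b')

    main :: IO ()
    main = do
      print (parse ab "abc")         -- Just (AB 'a' 'b',"c")
      print (print_ ab (AB 'a' 'b')) -- Just "ab"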

Language Virtualization for Heterogeneous Parallel Computing

Hassan Chafi, Zach DeVito, Adriaan Moors, Tiark Rompf, Arvind Sujeeth, Pat Hanrahan, Martin Odersky, and Kunle Olukotun describe an approach to parallel DSLs that is a hybrid between external DSLs and internal DSLs in Language Virtualization for Heterogeneous Parallel Computing.

As heterogeneous parallel systems become dominant, application developers are being forced to turn to an incompatible mix of low level programming models (e.g. OpenMP, MPI, CUDA, OpenCL). However, these models do little to shield developers from the difficult problems of parallelization, data decomposition and machine-specific details. Ordinary programmers are having a difficult time using these programming models effectively. To provide a programming model that addresses the productivity and performance requirements for the average programmer, we explore a domain-specific approach to heterogeneous parallel programming.

We propose language virtualization as a new principle that enables the construction of highly efficient parallel domain specific languages that are embedded in a common host language. We define criteria for language virtualization and present techniques to achieve them. We present two concrete case studies of domain-specific languages that are implemented using our virtualization approach.

While the motivation of the paper is parallelization, the proposed design looks like LINQ expression trees dialed to 11.
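
To make "expression trees" concrete: in this style of embedding, DSL terms build an intermediate representation instead of computing directly, so a back end can later analyze, optimize, or compile them. A minimal sketch of the flavor in Haskell (my own toy IR, not the paper's Scala framework):

    -- DSL terms build an IR instead of computing directly (a "deep embedding"),
    -- so a back end can later optimize, parallelize, or emit code for them.
    data Exp
      = Lit Double
      | Var String
      | Add Exp Exp
      | Mul Exp Exp
      deriving Show

    -- One interpretation: evaluate against an environment.
    eval :: [(String, Double)] -> Exp -> Double
    eval env e = case e of
      Lit d   -> d
      Var x   -> maybe (error ("unbound " ++ x)) id (lookup x env)
      Add a b -> eval env a + eval env b
      Mul a b -> eval env a * eval env b

    -- Another interpretation: emit a string, standing in for real code
    -- generation (e.g. targeting CUDA/OpenCL in the paper's setting).
    emit :: Exp -> String
    emit (Lit d)   = show d
    emit (Var x)   = x
    emit (Add a b) = "(" ++ emit a ++ " + " ++ emit b ++ ")"
    emit (Mul a b) = "(" ++ emit a ++ " * " ++ emit b ++ ")"

    -- The "user program" looks like arithmetic but only builds the tree.
    program :: Exp
    program = Add (Mul (Var "x") (Lit 2)) (Lit 1)

    main :: IO ()
    main = do
      putStrLn (emit program)          -- ((x * 2.0) + 1.0)
      print (eval [("x", 3)] program)  -- 7.0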
