General
Kona is a new open-source implementation of Arthur Whitney's K, an ASCII-based, APL-like language. Kona is a fully working version of K3.
If you have never tried APL, J, K, or their ilk, you might find this language incomprehensible at first -- unless you like a challenge! Watch the screencasts or read some of our earlier APL/J stories.
Regardless of your interest in K, any LtUer worth his salt will enjoy the source code. We have written before about the history of this remarkable C coding style, but I can't locate the link at the moment.
Albert Gräf is the author of the Pure programming language. Pure is a functional programming language based on term rewriting. It has a syntax featuring curried function applications, lexical closures and equational definitions with pattern matching, and thus is somewhat similar to languages of the Haskell and ML variety. Pure is also a dynamic language, and is more like Lisp in this respect. The interpreter has an LLVM backend that does JIT compilation.
Part 1 and Part 2
Keyword and Optional Arguments in PLT Scheme, Matthew Flatt and Eli Barzilay, 2009 Workshop on Scheme and Functional Programming.
The lambda and procedure-application forms in PLT Scheme support arguments that are tagged with keywords, instead of identified by position, as well as optional arguments with default values. Unlike previous keyword-argument systems for Scheme, a keyword is not self-quoting as an expression, and keyword arguments use a different calling convention than non-keyword arguments. Consequently, a keyword serves more reliably (e.g., in terms of error reporting) as a lightweight syntactic delimiter on procedure arguments. Our design requires no changes to the PLT Scheme core compiler, because lambda and application forms that support keywords are implemented by macros over conventional core forms that lack keyword support.
As usual, a solid paper by the PLTers, this time on flexible argument passing. Making named arguments apparent at compile-time (by introducing keyword symbols that may not be used as ordinary values) seems right and enables some optimizations. There are also some nice Racket-specifics in there, such as the use of the customizable application form #%app, which - together with "applicable structure types" - allows the implementation of named arguments in "userland". The paper is rounded out by a performance evaluation and a description of similar facilities in other languages.
I think this is a very good design (and implementation technique) for named arguments. A facility [edit: syntax support] for receiving all named arguments of a function seems to be missing, though -- but it can probably be added in userland, too.
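The general shape of the feature (though not PLT Scheme's compile-time treatment of keywords) is familiar from Python, whose native keyword arguments also let callers name arguments rather than rely on position, and which builds in "receiving all named arguments" via `**kwargs`. The function names below are made up for illustration:

```python
# Callers name arguments instead of relying on position; defaults make
# them optional; the bare `*` forces `tls` to be passed by keyword.
def make_server(host, port=8080, *, tls=False):
    return (host, port, tls)

# Named arguments may appear in any order.
print(make_server("example.org", tls=True, port=443))
# → ('example.org', 443, True)

# Receiving *all* named arguments is built in via **kwargs:
def log_call(**kwargs):
    return sorted(kwargs.items())

print(log_call(user="ann", action="login"))
# → [('action', 'login'), ('user', 'ann')]
```

The key difference from the paper's design is that Python keywords are resolved dynamically, whereas PLT Scheme's keywords are syntactic delimiters checked at expansion time, enabling better error reporting and optimization.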
In Invertible Syntax Descriptions: Unifying Parsing and Pretty Printing, Tillmann Rendel and Klaus Ostermann of the University of Marburg, Germany, apply the "don't repeat yourself" principle to parsers and pretty printers.
Parsers and pretty-printers for a language are often quite similar, yet both are typically implemented separately, leading to redundancy and potential inconsistency. We propose a new interface of syntactic descriptions, with which both parser and pretty-printer can be described as a single program. Whether a syntactic description is used as a parser or as a pretty-printer is determined by the implementation of the interface. Syntactic descriptions enable programmers to describe the connection between concrete and abstract syntax once and for all, and use these descriptions for parsing or pretty-printing as needed. We also discuss the generalization of our programming technique towards an algebra of partial isomorphisms.
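The core idea -- one description, two interpretations -- can be sketched far more crudely than the paper's Haskell interface of partial isomorphisms. In this toy Python version (entirely my own illustration, not the paper's API), a "syntactic description" is just a sequence of literal tokens and integer slots, and separate interpreters read it as a parser or as a pretty-printer:

```python
# A toy "syntactic description" for pairs like "(3,4)": a sequence of
# literal tokens and integer slots, usable in both directions.
desc = ["(", int, ",", int, ")"]

def parse(desc, s):
    """Interpret the description as a parser."""
    values = []
    for part in desc:
        if part is int:
            digits = ""
            while s and s[0].isdigit():
                digits, s = digits + s[0], s[1:]
            values.append(int(digits))
        else:
            assert s.startswith(part), f"expected {part!r}"
            s = s[len(part):]
    return tuple(values)

def pretty(desc, values):
    """Interpret the same description as a pretty-printer."""
    values = list(values)
    return "".join(str(values.pop(0)) if part is int else part
                   for part in desc)

print(parse(desc, "(3,4)"))   # → (3, 4)
print(pretty(desc, (3, 4)))   # → (3,4)
```

The paper's contribution is making this idea compositional and typed: descriptions are built from combinators, and partial isomorphisms mediate between concrete and abstract syntax so that parsing and printing stay consistent by construction.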
Hassan Chafi, Zach DeVito, Adriaan Moors, Tiark Rompf, Arvind Sujeeth, Pat Hanrahan, Martin Odersky, and Kunle Olukotun describe an approach to parallel DSLs that is a hybrid between external DSLs and internal DSLs in Language Virtualization for Heterogeneous Parallel Computing.
As heterogeneous parallel systems become dominant, application developers are being forced to turn to an incompatible mix of low level programming models (e.g. OpenMP, MPI, CUDA, OpenCL). However, these models do little to shield developers from the difficult problems of parallelization, data decomposition and machine-specific details. Ordinary programmers are having a difficult time using these programming models effectively. To provide a programming model that addresses the productivity and performance requirements for the average programmer, we explore a domain-specific approach to heterogeneous parallel programming.
We propose language virtualization as a new principle that enables the construction of highly efficient parallel domain specific languages that are embedded in a common host language. We define criteria for language virtualization and present techniques to achieve them. We present two concrete case studies of domain-specific languages that are implemented using our virtualization approach.
While the motivation of the paper is parallelization, the proposed design looks like LINQ expression trees dialed to 11.
Some light reading for the holiday season: writing for American Scientist, Brian Hayes says in The Semicolon Wars:
A catalog maintained by Bill Kinnersley of the University of Kansas lists about 2,500 programming languages. Another survey, compiled by Diarmuid Piggott, puts the total even higher, at more than 8,500. And keep in mind that whereas human languages have had millennia to evolve and diversify, all the computer languages have sprung up in just 50 years. Even by the more-conservative standards of the Kinnersley count, that means we've been inventing one language a week, on average, ever since Fortran.
For ethnologists, linguistic diversity is a cultural resource to be nurtured and preserved, much like biodiversity. All human languages are valuable; the more the better. That attitude of detached reverence is harder to sustain when it comes to computer languages, which are products of design or engineering rather than evolution. The creators of a new programming language are not just adding variety for its own sake; they are trying to make something demonstrably better. But the very fact that the proliferation of languages goes on and on argues that we still haven't gotten it right. We still don't know the best notation—or even a good-enough notation—for expressing an algorithm or defining a data structure.
The idea of mining code repositories to tease out language design issues has been discussed before on LtU. In this paper, J. Garrett Morris looks at the usage of overlapping instances on Hackage, the Haskell code repository site, to decide whether to include the feature in a new language.
Using Hackage to Inform Language Design
Abstract
Hackage, an online repository of Haskell applications and libraries, provides a hub for programmers to both release code to and use code from the larger Haskell community. We suggest that Hackage can also serve as a valuable resource for language designers: by providing a large collection of code written by different programmers and in different styles, it allows language designers to see not just how features could be used theoretically, but how they are (and are not) used in practice. We were able to make such a use of Hackage during the design of the class system for a new Haskell-like programming language. In this paper, we sketch our language design problem, and how we used Hackage to help answer it. We describe our methodology in some detail, including both ways that it was and was not effective, and summarize our results.
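To give a flavor of this kind of survey (this is a hypothetical sketch, not the authors' actual methodology), one crude first pass over a local checkout of Hackage packages would be to scan Haskell sources for the `OverlappingInstances` LANGUAGE pragma:

```python
# Hypothetical sketch: detect whether a Haskell module enables
# OverlappingInstances via a LANGUAGE pragma. Real surveys would also
# need to handle .cabal files and per-instance usage, which this skips.
import re

PRAGMA = re.compile(r"{-#\s*LANGUAGE\s+([^#]*?)#-}")

def uses_overlapping(source: str) -> bool:
    for m in PRAGMA.finditer(source):
        extensions = [e.strip() for e in m.group(1).split(",")]
        if "OverlappingInstances" in extensions:
            return True
    return False

sample = "{-# LANGUAGE FlexibleInstances, OverlappingInstances #-}\nmodule M where"
print(uses_overlapping(sample))  # → True
```

A pragma count alone says little about *how* the feature is used, which is why the paper's methodology section -- distinguishing uses that are and are not expressible without the feature -- is the interesting part.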
Several years ago, a reading group I was in read about the Flux OSKit Project, which aimed to provide a modular basis for operating systems. One of the topics of discussion was the possibility of, and possible benefits of, an application-specific OS. (For example, the fearful spectre of EmacsOS was raised.)
Today, I ran across "Turning down the LAMP: Software specialization for the cloud", which actually makes a pretty strong case for the idea on a virtual machine infrastructure,
...We instead view the cloud as a stable hardware platform, and present a programming framework which permits applications to be constructed to run directly on top of it without intervening software layers. Our prototype (dubbed Mirage) is unashamedly academic; it extends the Objective Caml language with storage extensions and a custom run-time to emit binaries that execute as a guest operating system under Xen. Mirage applications exhibit significant performance speedups for I/O and memory handling versus the same code running under Linux/Xen.
As one example,
Frameworks which currently use (for example) fork(2) on a host to spawn processes would benefit from using cloud management APIs to request resources and eliminate the distinction between cores and hosts.
On the other hand, I suspect that this "unashamedly academic" idea may already be advancing into the commercial arena, if I am correctly reading between the lines of the VMware vFabric tc Server marketing material.
Many people have looked at programming languages through the Sapir-Whorf lens, so it's not uncommon to find people making PL claims using that hypothesis. Also not surprisingly, the topic keeps re-appearing here on LtU.
This week's NY Times magazine has an article titled Does Your Language Shape How You Think? by Guy Deutscher which starts as a retrospective on Whorf but then goes into what new research has shown.
Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: "Languages differ essentially in what they must convey and not in what they may convey." This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about.
...
When your language routinely obliges you to specify certain types of information, it forces you to be attentive to certain details in the world and to certain aspects of experience that speakers of other languages may not be required to think about all the time. And since such habits of speech are cultivated from the earliest age, it is only natural that they can settle into habits of mind that go beyond language itself, affecting your experiences, perceptions, associations, feelings, memories and orientation in the world.
