General

Extending the Scope of Syntactic Abstraction

Extending the Scope of Syntactic Abstraction by Oscar Waddell and R. Kent Dybvig, POPL '99. (Also: Waddell's thesis with the same title.)

The benefits of module systems and lexically scoped syntactic abstraction (macro) facilities are well-established in the literature. This paper presents a system that seamlessly integrates modules and lexically scoped macros. The system is fully static, permits mutually recursive modules, and supports separate compilation. We show that more dynamic module facilities are easily implemented at the source level in the extended language supported by the system.

This paper is probably known to many LtUers, but it's never been posted, and I find it very enjoyable.

It introduces two very simple forms, (module name exports-list body) for naming and enclosing a body of code containing definitions and expressions, and (import module-name) for importing the definitions from such a module into the current scope.

Module names are lexically scoped just like variables, and modules can appear wherever definitions can occur, so one can define modules (say) inside a lambda. Furthermore, modules and import forms may also be generated by macros.
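
To make this concrete, here is a minimal sketch using the surface syntax described above; the module names (arith, local) and their definitions are invented for illustration and may differ in detail from the paper's own examples:

  (module arith (double quadruple)
    (define (double x) (* 2 x))
    (define (quadruple x) (double (double x))))

  (define (f y)
    (import arith)          ; import is a definition form, so it can appear
    (quadruple y))          ; wherever definitions can occur...

  (define (g x)
    (module local (sq)      ; ...and so can module itself, e.g. inside a lambda
      (define (sq n) (* n n)))
    (import local)
    (sq x))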

They show how more advanced features (such as qualified references ("module::var"), anonymous modules, selective importing and renaming, and mutually recursive modules, among others) can be built on top of this simple base using a hygienic macro system, cleverly and without much fuss.

Side note: such a "syntactic" module system can be contrasted with R6RS's "static" library system. There is currently some discussion to this effect in the Scheme community.

Can a Biologist Fix a Radio?

Judging from its title, Can a Biologist Fix a Radio? — or, What I Learned while Studying Apoptosis (Y. Lazebnik, 2004) would seem pretty off topic for LtU. It starts with a humorous take on how a biologist might try to understand the workings of a radio, but it ends in a plea for (computer-aided) formal languages and quantitative modeling of biological systems.

The question is how to facilitate this change, which is not exactly welcomed by many experimental biologists, to put it mildly. Learning computer programming was greatly facilitated by BASIC, a language that was not very useful to solve complex problems, but was very efficient in making one comfortable with using a computer language and demonstrating its analytical power. Similarly, a simple language that experimental scientists can use to introduce themselves to formal descriptions of biological processes would be very helpful in overcoming a fear of long forgotten mathematical symbols. Several such languages have been suggested but they are not quantitative, which limits their value. Others are designed with modeling in mind but are too new to judge as to whether they are user friendly. However, I hope that it is only a question of time before a user friendly and flexible formal language will be taught to biology students, as it is taught to engineers, as a basic requirement for their future studies. My advice to experimental biologists is to be prepared.

Further, at a meta-level, the issues he raises seem to directly mirror the challenges software developers and researchers face in trying to understand and design complex computing systems. If nothing else it's good to know we're not alone.

And it doesn't hurt at all that it's a fun read.

Reminder: OOPSLA is now SPLASH

The call for papers for the SPLASH conference is now up.

SPLASH is the conference formerly known as OOPSLA, and the OOPSLA name is now being used for one of the two main colocated events at SPLASH (the other being Onward!). So if you've got a paper full of interesting results, send it in by March 25.

EDIT: both Martin Rinard in the comments below, and William Cook in email, have been very emphatic that the name change is to make clear that the conference is not just about objects any more -- anything that's (a) about programming languages maximally broadly construed, and (b) good work, is fair game for the conference now. Sorry for any confusion!

Recent Progress in Quantum Algorithms

Quantum theory has acquired a reputation as an impenetrable theory accessible only after acquiring a significant theoretical physics background. One of the lessons of quantum computing is that this is not necessarily true: quantum computing can be learned without mastering vast amounts of physics, but instead by learning a few simple differences between quantum and classical information.

A well-written article in the CACM.

There are also slides from a talk by Rob Pike on the same topic.

Google TechTalk: The Evolution of End-User Programming

End-user programming has been a topic of much discussion lately in mainstream software outlets. The IEEE journal Software recently had an issue dedicated to end-user programming challenges; see Joel Brandt's Opportunistic Programming: Writing Code to Prototype, Ideate and Discover and Martin Erwig's Software Engineering for Spreadsheets. Also, a few years ago a consortium of universities formed End-Users Shaping Effective Software, which includes Martin Erwig's PLT work on bringing type systems to spreadsheets.

Recently, Google invited Allen Cypher to give a TechTalk on The Evolution of End-User Programming, which appears to be a recapitulation of his VL/HCC paper by the same name. Allen was the editor of Watch What I Do (an LtU recommended reading).

Towards the end of the talk, Allen mentions the practical issue of knowing when to use which tool, and that novice users struggle with finding the right tool for the right job. What's notable about discussions of end-user software engineering is how little attention its proponents pay to its critics' biggest criticism: security. In the IEEE Software realm, probably the most outspoken critic has been Warren Harrison (see: The Dangers of End-User Programming). For example, Ko's 2009 ACM Computing Surveys article The State of the Art in End-User Software Engineering mentions security only once, in the context of designing end-user description languages for security, but does not assess how well this technique compares to techniques software engineers might employ. It seems strange that leading researchers in visual languages and end-user programming do not discuss the potential use of object capability systems, especially as companies try to monetize a percentage of the value added by users who mash up their service with other services.

The Development of Sage

Sage is a project to create a viable free open source alternative to Magma, Maple, Mathematica and Matlab. The lead developer/manager William Stein has recently written Mathematical Software and Me: A Very Personal Recollection, a rather enjoyable story of his experience with mathematical software, especially Magma, and how Sage came to be.

One of the difficulties of writing broadly useful math software is the sheer size and scope of such a project. It is easily outside the abilities of even the most prodigious lone developer. So the focus of Sage, at least up until recently, has been on creating Python-based interfaces to existing mathematical software. For example, for symbolic calculation the Sage distribution includes Maxima (written in Common Lisp), a fork of Macsyma dating back to the early 1980s, and released as open-source software by the US Department of Energy approximately 10 years ago. In addition to Maxima, Sage includes the ability to call out to Magma, Mathematica, and Maple.

There are some interesting PLT-related snippets: for example, Magma's language is frequently criticized even as its algorithms are widely praised. In conversations with others, OCaml and Haskell were brought up, but William Stein chose Python because he felt it was more accessible. Also, Axiom, which includes the dependently typed language Aldor, was rejected in favor of Maxima because Maxima was less esoteric and much more widely used.

Eleven Theses on Clojure

Tim Bray shares his conclusions about Clojure.

To get your juices going: He claims "It’s the Best Lisp Ever"! (And that tail call optimization is a red herring.)

I have been contemplating a Clojure department for a while now, but heck, we may be too far behind the curve by now...

John Hughes on Erlang and Haskell

John Hughes talks about his experience with Erlang vs. Haskell in an InfoQ interview. While the discussions about strict vs. lazy, pure vs. side-effecting, and dynamic vs. static typing are interesting, he raises a good question for the LtU crowd at the end:

I think functional programming is a very interesting concept for the future and for the present indeed. One of the things I do wonder about though, is when I got interested in the field, the mainstream was probably Fortran and COBOL and even C was fairly new at that time. The functional programming pioneers spoke of an order of magnitude improvement in productivity and I think functional programming has delivered that.

If you compare Haskell programs to C code or even C++ often, they are about an order of magnitude smaller and simpler. The same is for Erlang, those results are being validated in the industry. Where is the next order of magnitude coming from? I wish I had an answer to that question because it's hard to see almost. When you look at a beautiful Haskell program, how could this be 10 times shorter? But I think we need to be asking ourselves that kind of question. If I had a good idea there, I would spend the rest of my career working on it.

So, LtU, where is the next order of magnitude coming from?

Relations of Language and Thought: The View from Sign Language and Deaf Children

Relations of Language and Thought: The View from Sign Language and Deaf Children provides an interesting angle on the Sapir-Whorf hypothesis that we periodically discuss on LtU. A small sample from Google Books is available.

...Hypotheses concerning language and thought...:

  • Language equals thought. Perhaps the simplest view of language and thought is that they are essentially the same thing. This position is most frequently ascribed to American behaviorists, and especially to John Watson, who argued that thought is just subvocal speech.
  • Language and thought are independent. This view, most often attributed to theorists like Noam Chomsky and Jerry Fodor, suggests that the development of language and the development of cognition are distinct, depending on different underlying processes and experiences.
  • Language determines thought. In the form usually identified with the linguistic determinism and linguistic relativity theories of Sapir and Whorf, this perspective directly entails a correlation between language skill and cognitive skill. One implication of this view is that individuals who have "inferior" (or superlative) language are expected to have "inferior" (or superlative) thought. Implicitly or explicitly, such a perspective has been used as a rationale for an emphasis on spoken language for deaf children by those who have seen sign language as little more than a set of pragmatic gestures.
...The more interesting question... is whether growing up with exposure to a signed language affects cognition in a way different from growing up with a spoken language. Indeed, that is one of the fundamental questions of this volume. While we fully agree... that any strong form of the Sapir-Whorf position appears untenable, it also seems clear that language can affect and guide cognition in a variety of ways. Much of what a child knows about the world, from religion to the habitats of penguins, is acquired through language.

Sign language is an obvious candidate for linguistic study, since the mode is visual as opposed to oral/aural. The summary of one of the authors is telling:

The conclusion that American Sign Language (ASL) is an independent, noncontrived, fully grammatical human language comparable to any spoken language has been supported by over 30 years of research. Recent research has shown that ASL displays principles of organization remarkably like those for spoken languages, at discourse, semantic, syntactic, morphological, and even phonological levels. Furthermore, it is acquired, processed, and even breaks down in ways analogous to those found for spoken languages. The similarities between signed and spoken languages are strong enough to make the differences worth investigating. In the third section of this chapter, I will argue that although there are differences in detail, the similarities are strong enough to conclude that essentially the same language mechanism underlies languages in either modality.

On a programming language level, I can't help but think that sign language offers valuable clues into the nature of visual PLs (though I haven't quite nailed down any specifics). ASL on Wikipedia informs us that signs can be broken down into three categories:

  • Transparent: Non-signers can usually correctly guess the meaning
  • Translucent: Meaning makes sense to non-signers once it is explained
  • Opaque: Meaning cannot be guessed by non-signers

The majority of signs are opaque. As much as those who design visual languages would like them to be intuitive - falling into the Transparent or Translucent categories - I figure you still end up using many signs that are only meaningful internally to the language at hand.

On a personal level, I have recently been attempting to delve into ASL. I've almost got the alphabet and numbers down, and have a vocabulary of about 100 additional signs - which probably means that I'm at a proficiency level somewhere between ankle biter and Sesame Street. I do find it to be a fascinating language. I noticed when I was looking at the course offerings for college (my son started university this year) that ASL is now offered for foreign language credit (wish it had been offered when I was a student all those years ago).

Function Interface Models for Hardware Compilation

Function Interface Models for Hardware Compilation. Dan Ghica.

The problem of synthesis of gate-level descriptions of digital circuits from behavioural specifications written in higher-level programming languages (hardware compilation) has been studied for a long time yet a definitive solution has not been forthcoming. The argument of this essay is mainly methodological, bringing a perspective that is informed by recent developments in programming-language theory. We argue that one of the major obstacles in the way of hardware compilation becoming a useful and mature technology is the lack of a well defined function interface model, i.e. a canonical way in which functions communicate with arguments. We discuss the consequences of this problem and propose a solution based on new developments in programming language theory. We conclude by presenting a prototype implementation and some examples illustrating our principles.

Ghica's work has been mentioned before, but not this particular article, which is a follow-up to that work. The paper touches on a number of common LtU topics: language design for hardware description, game semantics, Reynolds's SCI... I don't know much about hardware design, so I'd be interested to hear what people think.
