
What do you mean by studying "programming languages"?

Reading some of the recent threads, it occurred to me that there's room for a new thread in the spirit of the Getting Started thread and threads like Explaining monads and Why type systems are interesting.

The issue I want to raise is what the focus of the study of programming languages is. By this I mean: what is the thing (or things) that programming language researchers study? I do not want to discuss what should be done, at the moment, but rather to orient outsiders about the actual practice in the field.

Specifically, my aim is to explain which aspects of languages are studied, to let people know what these research areas are called, and perhaps where to find more information about new results etc. I am not thinking about specific lines of research, but rather about the kinds of things researchers study.

My intention will hopefully be clearer after reading my answer to the question in the title of this post:

Most of the time, programming language theory (PLT) is concerned with the semantics (i.e., meaning) of specific programming constructs. Much less research is done on other aspects, such as syntax, implementation and runtime issues, and languages as wholes (as opposed to the study of individual constructs).

This is, of course, a very incomplete answer to the question. Let the discussion begin...

Linear types for aliased resources

Linear types for aliased resources. Chris Hawblitzel.

Type systems that track aliasing can verify state-dependent program properties. For example, such systems can verify that a program does not access a resource after deallocating the resource. The simplest way to track aliasing is to use linear types, which on the surface appear to ban the aliasing of linear resources entirely. Since banning aliasing is considered too draconian for many practical programs, researchers have proposed type systems that allow limited forms of aliasing, without losing total control over state-dependent properties. This paper describes how to encode one such system, the capability calculus, using a type system based on plain linear types with no special support for aliasing. Given well-typed capability calculus source programs, the encodings produce well-typed target programs based on linear types. These encodings demonstrate that, contrary to common expectations, linear type systems can express aliasing of linear resources.

The prose is slightly confusing, but the code examples help.

Sections 1 & 2 should be enough for those who only want to understand the problem with aliasing, and get a glimpse of what linear types are.
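For readers who want a concrete feel for the problem before opening the paper, here is a minimal sketch, not taken from the paper: Rust's affine ownership stands in for a linear type discipline, and the `FileHandle` and `deallocate` names are invented for illustration. A consumed value cannot be used (or aliased) again, which is exactly the "draconian" restriction on aliasing that the paper sets out to relax.

```rust
// Invented example: an affine resource, roughly in the spirit of linear types.
struct FileHandle {
    name: String,
}

// Consumes its argument; after this call the handle is gone for good.
fn deallocate(h: FileHandle) -> String {
    format!("closed {}", h.name)
}

fn main() {
    let h = FileHandle { name: String::from("data.txt") };
    let msg = deallocate(h);
    println!("{}", msg);
    // deallocate(h);   // rejected by the compiler: `h` was already moved.
    // let alias = &h;  // likewise rejected: the resource no longer exists.
}
```

The commented-out lines are the point: the type system, not a runtime check, rules out use-after-deallocation. The paper's question is how to recover safe, limited aliasing on top of a discipline this strict.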

Terminology proposal

From a comment:
(Likewise it is ridiculous to say Lisp has no syntax--it certainly does; just a rather simple one).

I think even this is too much. Lisp's true syntax is anything but simple, and in fact it varies from Lisp to Lisp. I propose that we use the phrase latent syntax to describe the Lisp approach to syntax, by analogy with latent types.

The idea, of course, is that Lisp has a very trivial syntactic structure at one level, but there's an entire world of syntactic constraints that are latent in the surface syntax, but that are clearly syntax rather than semantics and which the programmer (and many analysis tools) must be aware of and manage. Note all the things in R5RS that are labeled as "syntax" or "derived syntax"...
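A small sketch of the distinction, with invented names: the surface syntax admits any well-formed s-expression, while a separate check enforces the latent syntax of a `let` form, i.e. `(let ((var expr) ...) body)`. Being a valid tree is not the same as being a valid `let`.

```rust
// Invented example: surface syntax is just nested lists of atoms.
enum Sexp {
    Atom(String),
    List(Vec<Sexp>),
}

// The latent syntax of `let`: a keyword, a list of two-element
// (variable expression) bindings whose first element is an atom,
// and at least one body form.
fn valid_let(e: &Sexp) -> bool {
    match e {
        Sexp::List(items) if items.len() >= 3 => {
            matches!(&items[0], Sexp::Atom(k) if k == "let")
                && match &items[1] {
                    Sexp::List(bindings) => bindings.iter().all(|b| {
                        matches!(b, Sexp::List(p)
                            if p.len() == 2 && matches!(&p[0], Sexp::Atom(_)))
                    }),
                    _ => false,
                }
        }
        _ => false,
    }
}

fn main() {
    use Sexp::*;
    // (let ((x 1)) x) -- satisfies the latent syntax of `let`.
    let good = List(vec![
        Atom("let".into()),
        List(vec![List(vec![Atom("x".into()), Atom("1".into())])]),
        Atom("x".into()),
    ]);
    // (let x 1) -- a perfectly good s-expression, but not a `let`.
    let bad = List(vec![Atom("let".into()), Atom("x".into()), Atom("1".into())]);
    println!("{} {}", valid_let(&good), valid_let(&bad)); // true false
}
```

The reader (and any s-expression parser) sees only the trivial surface structure; `valid_let` is where the latent constraints live, which is exactly the layer R5RS is describing when it labels forms as "syntax" or "derived syntax".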

Does this distinction make sense? I really like the phrase latent syntax to describe this, but maybe there's something better? There's abstract syntax, of course, but I'm not sure that's really the same thing. For instance, C and Java have concrete and abstract syntax, but neither of them really has latent syntax. I'm not sure what the opposite would be... manifest syntax, maybe?

Obviously this applies to XML as well. Is there a standard terminology in that community? "Schema" isn't particularly useful...