A Formulae-as-Types Interpretation of Subtractive Logic

A Formulae-as-Types Interpretation of Subtractive Logic

We present a formulae-as-types interpretation of Subtractive Logic (i.e. bi-intuitionistic logic). This presentation is two-fold: we first define a very natural restriction of the lambda-μ-calculus which is closed under reduction and whose type system is a constructive restriction of Classical Natural Deduction. Then we extend this deduction system conservatively to Subtractive Logic. From a computational standpoint, the resulting calculus provides a type system for first-class coroutines (a restricted form of first-class continuations).

Yet another connection between subtractive logic and control. I remember the author being mentioned on LtU before, but I cannot find any citations.

GHC Survey Results

The results are in for the 2005 Glasgow Haskell Compiler user survey, with a summary and all the raw data. The comments were the highlight for me; see for instance Applications I use GHC for.

(Previous LtU mention)

Generics are a mistake?

Generics Considered Harmful

Ken Arnold, "programmer and author who helped create Jini, JavaSpaces, Curses, and Rogue", writes that the usefulness of generics is outweighed by their complexity. Ken is talking about Java 5, but such critiques are well-known for C++, and C# is not immune either. Ken describes the Java case as follows:

So, I don't know how to ease into this gently. So I'll just spit it out.

Generics are a mistake.

This is not a problem based on technical disagreements. It's a fundamental language design problem.

Any feature added to any system has to pass a basic test: If it adds complexity, is the benefit worth the cost? The more obscure or minor the benefit, the less complexity it's worth. Sometimes this is referred to with the name "complexity budget". A design should have a complexity budget to keep its overall complexity under control.

[...]

Which brings up the problem that I always cite for C++: I call it the "Nth order exception to the exception rule." It sounds like this: "You can do x, except in case y, unless y does z, in which case you can if ..."

Humans can't track this stuff. They are always losing which exception to what other exception applies (or doesn't) in any given case.

[...]

Without [an explicit complexity] budget, it feels like the JSR process ran far ahead, without a step back to ask “Is this feature really necessary”. It seemed to just be understood that it was necessary.

It was understood wrong.

The article contains a few simple supporting examples, including the interesting definition of Java 5's Enum type as:

Enum<T extends Enum<T>>

...which "we're assured by the type theorists ... we should simply not think about too much, for which we are grateful."

If we accept the article's premise, here's a question with an LtU spin: do the more elegant and tractable polymorphic type systems with inference, as found in functional languages, improve on this situation enough to offer a viable alternative? In other words, are these complexity problems a selling point for better type systems, or another barrier to adoption?
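To make the contrast concrete, here is a minimal sketch (ours, not from the article) of how the constraint encoded by Java's Enum<T extends Enum<T>> (an enum may only be compared with values of its own type) falls out of an ordinary Haskell type class, with the types inferred at use sites:

-- Two unrelated enumeration types; both get ordering for free.
data Colour = Red | Green | Blue deriving (Eq, Ord, Show, Enum, Bounded)
data Size   = Small | Large      deriving (Eq, Ord, Show, Enum, Bounded)

ok :: Ordering
ok = compare Red Blue        -- accepted: both arguments have type Colour

-- bad = compare Red Small   -- rejected: Colour and Size are different types

No self-referential bound is needed, because the method compare :: Ord a => a -> a -> Ordering already ties both arguments to the same type.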

[Thanks to Perry Metzger for the pointer.]

Revisiting coroutines

Revisiting coroutines

This paper defends the revival of coroutines as a general control abstraction. After proposing a new classification of coroutines, we introduce the concept of full asymmetric coroutines and provide a precise definition for it through an operational semantics. We then demonstrate that full coroutines have an expressive power equivalent to one-shot continuations and one-shot partial continuations. We also show that full asymmetric coroutines and one-shot partial continuations have many similarities, and therefore present comparable benefits. Nevertheless, coroutines are easier to implement and understand, especially in the realm of procedural languages. Finally, we provide a collection of programming examples that illustrate the use of full asymmetric coroutines to support direct and concise implementation of several useful control behaviors, including cooperative multitasking.

Given the real or imagined interest in control operators on LtU, I believe this paper makes nice reading. See also Coroutines in Lua.
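For readers who have not met the idea before, here is a minimal sketch of an asymmetric coroutine, in Haskell rather than the paper's Lua-flavored operational semantics; the types and names are ours, chosen for illustration only.

-- A coroutine either finishes with a result, or yields an output and suspends,
-- waiting to be resumed by its caller with an input. This is the asymmetric
-- discipline: on every yield, control returns to the caller.
data Coroutine i o r = Done r | Yield o (i -> Coroutine i o r)

-- Example: yield a running count, where each resume supplies the increment.
counter :: Int -> Coroutine Int Int r
counter n = Yield n (\step -> counter (n + step))

-- The caller drives the coroutine step by step.
demo :: [Int]
demo = go (counter 0) [1, 2, 3]   -- [0, 1, 3, 6]
  where
    go (Done _)    _        = []
    go (Yield o _) []       = [o]
    go (Yield o k) (i : is) = o : go (k i) is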

LispNYC's proposed Summer of Code projects

Interesting list.

Personally, I am most interested in seeing results come out of the PltStepper, PltDb and ErLisp projects.

Scottish Programming Language Seminar


Scottish Programming Language Seminar
Many thanks to Greg Michaelson and Phil Trinder for organizing this meeting!

This is the third meeting of SPLS, and it's great to see that SPLS has already grown into what it was intended to be: a robust forum for language researchers in Scotland (and beyond). We had in attendance thirty or forty programming language researchers, from U of Edinburgh, Heriot-Watt, U of Glasgow, Strathclyde, and St Andrews, as well as speakers from Nottingham and Hertfordshire, both of whom had traveled to Edinburgh just for the occasion.

Greg Michaelson noted that, as it happened, none of the speakers was Scottish. (Richard is English, Anne is French, Conor is Irish, DeLesley is American, and Sven-Bodo is German.) I suggested that we parse the name differently, and say that it is a seminar for work on Scottish Programming Languages. With POP2, SASL, Hope, ML, Haskell, and (soon, I hope) Links, we have quite enough to keep us busy!

We were hosted by the International Centre for Mathematical Sciences, which is located in the house in which James Clerk Maxwell was born. I work in The James Clerk Maxwell Building in the north campus of the University of Edinburgh, affectionately known as JCMB to its inmates (and rather a cheek in naming, as Maxwell's main connection to Edinburgh is that they refused to hire him). It was a pleasure to visit the other JCMB, which is more appealing in appearance. Cheers to Greg and Phil for finding a wonderful venue.

Richard Connor, Strathclyde University
Typed vs Untyped: performing a real experiment?

Put forward the excellent idea that we should actually try to experimentally measure what impact type systems have on programmer productivity and program reliability and maintainability. Unfortunately, I suspect this will fall foul of a symptom I noticed years ago in papers that report experiments on programming languages: if the person who wrote the paper believed that imperative languages were better than functional languages, then that is what his experiments proved, and if the person who wrote the paper believed the opposite, then his experiments proved the opposite.

Richard's experiment consists of comparing untyped and typed variants of Javascript. But his untyped variant supports the undefined value and his typed variant does not, so it's not clear whether he'll be measuring the effects of typing or of flexible null values. I can suggest a number of different experiments, which I suspect would yield rather different results: compare Java 1.4 and Java 1.5 (with and without generics -- this has the great advantage that it is working with two real languages), compare Haskell with and without types, or compare Scheme with and without types (using the type inferencer in DrScheme for the typed version, so again you have two real languages).

Anne Benoit, Edinburgh University
Enhancing the performance of Grid Applications with Skeletons and Process Algebras

Skeletons with performance models applied to automatically choose the best implementation. Seems like nice work. Unfortunately, I'm not very familiar with skeletons, so I would have benefited from some simple, complete examples to convey the basic ideas.

Conor McBride, Nottingham University
Idioms

A few years ago, John Hughes noticed that sometimes a monad is too strong, and he introduced a weaker structure called 'arrows' with many interesting applications. (Every monad is an arrow, but not conversely.) Conor has now noticed that there is another useful structure halfway between arrows and monads, which he calls idioms. (Every monad is an idiom and every idiom is an arrow, but not conversely.)


class Idiom i where
  k :: x -> i x
  s :: i (x -> y) -> i x -> i y

(To see why these are called k and s, take i x = a -> x.) It was a Pearl of a talk, and I encourage him to write it up as a pearl for JFP.
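As a quick sketch of that parenthetical remark (the Reader wrapper below is ours, added only to make the instance legal Haskell): with i x = a -> x, k and s are literally the K and S combinators.

newtype Reader a x = Reader { runReader :: a -> x }

instance Idiom (Reader a) where
  k x                     = Reader (\_ -> x)           -- K: \x _ -> x
  s (Reader f) (Reader g) = Reader (\a -> f a (g a))   -- S: \f g a -> f a (g a)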

DeLesley Hutchins, Edinburgh University
Feature Oriented Programming

An object calculus that is good for "deep mixin" combination. Unlike most typed languages it is not stratified -- objects and their types are considered to be the same sorts of things. [...]

Sven-Bodo Scholz, University of Hertfordshire
Using Sub-types and Intersection Types to strike the Balance Between Static and Dynamic Typing

An interesting companion to Richard's talk. Sven-Bodo has a functional language for scientific programming with an optimizing compiler that achieves performance comparable to Fortran. (And a compiler consisting of 500K lines of C.) This talk focussed on the type system, which in effect ranges from highly typed to untyped. He has a hierarchy of types for arrays, ranging from no information to specific information.


[*]   - any array
[]    - any scalar
[.]   - any vector
[.,.] - any matrix
[3]   - vector of length 3
[4]   - vector of length 4
[2,3] - matrix with 2 rows and 3 columns
...

The system makes heavy use of intersection types. It tries to give more precise types (lower down in the hierarchy sketched above), but moves up the hierarchy when it is too hard to give a precise type. (It wasn't clear to me under which circumstances it would move up the hierarchy.) This complements Milner's principle: Well-typed programs can't go wrong. In this system, if a program is not well-typed it will typically move up the hierarchy. Hence, one can't guarantee that well-typed programs won't go wrong (unless one inspects them and determines that all subterms are given precise types rather than imprecise types), but one can guarantee that ill-typed programs must go wrong!
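A rough sketch, in Haskell rather than SAC, of one way to read the hierarchy above: three levels of precision with a least upper bound that "moves up" when precise information is lost. The names and the three-level simplification are ours.

-- Exact [2,3] corresponds to [2,3], Rank 1 to [.], AnyArray to [*].
data Shape = Exact [Int] | Rank Int | AnyArray deriving (Eq, Show)

rank :: Shape -> Maybe Int
rank (Exact ds) = Just (length ds)
rank (Rank r)   = Just r
rank AnyArray   = Nothing

-- Least upper bound: when two shape descriptions disagree, give up precision.
lub :: Shape -> Shape -> Shape
lub a b
  | a == b                             = a
  | rank a == rank b, Just r <- rank a = Rank r
  | otherwise                          = AnyArray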

A fine day. My thanks to all those who made it happen! The next meetings are scheduled for Strathclyde in September and St Andrews in January.

How To Read a Paper

Some of the papers mentioned recently are quite advanced, and require careful reading.

It occurred to me that not everyone here is adept at reading papers of this sort, and that some tips might be helpful.

So if you are one of those who constantly read papers, how about describing your reading habits?

Do you skim first? Do you follow the detailed proofs on first reading? Do you try to execute code?

Specifically, what would you suggest to someone who asks for help reading A Monadic Framework for Subcontinuations?

And no, "go back to school" isn't an acceptable answer...

A Monadic Framework for Subcontinuations

A Monadic Framework for Subcontinuations

Functional and delimited continuations are more expressive than traditional abortive continuations, and they apparently require a framework beyond traditional continuation or monadic semantics. We show that this is not the case: standard continuation semantics is sufficient to explain directly the common control operators for delimited continuations. This implies a monadic framework for typed and encapsulated functional and delimited continuations, which we design and implement as a Haskell library.
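To get a feel for the claim, here is a standard textbook-style sketch (ours, not the paper's library) of how a reset/shift pair for delimited control can be written directly in the ordinary continuation monad:

import Control.Monad.Trans.Cont (Cont, cont, runCont)

-- Delimit the context: run the body with the identity continuation.
reset :: Cont r r -> Cont s r
reset m = cont (\k -> k (runCont m id))

-- Capture the continuation up to the nearest reset and hand it to f.
shift :: ((a -> r) -> Cont r r) -> Cont r a
shift f = cont (\k -> runCont (f k) id)

-- reset (1 + shift (\k -> k (k 10))) evaluates to 1 + (1 + 10) = 12
example :: Int
example = runCont (reset (fmap (1 +) (shift (\k -> return (k (k 10)))))) id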

The Underhanded C Contest

Inspired by Daniel Horn's Obfuscated V contest in the fall of 2004, we hereby announce an annual contest to write innocent-looking C code implementing malicious behavior. In many ways this is the exact opposite of the Obfuscated C Code Contest: in this contest you must write code that is as readable, clear, innocent and straightforward as possible, and yet it must fail to perform at its apparent function. To be more specific, it should do something subtly evil.

I wasn't aware of this contest. This concept sounds like fun, the idea being to write source code that easily passes visual inspection by other programmers.

The challenge for the first UCC is to write a simple program that performs some basic image-processing operation, for example smoothing or resampling, but manages to conceal a unique imperceptible fingerprint in each image it outputs.

Crystal Scheme: A Language for Massively Parallel Machines

Crystal Scheme: A Language for Massively Parallel Machines
Massively parallel computers are built out of thousands of conventional but powerful processors with independent memories. Very simple topologies, mainly based on physical neighbourhood, link these processors. The paper discusses extensions to the Scheme language in order to master such machines. Parallelism is introduced by allowing arguments of functions to be evaluated concurrently. Migration across the different processors is achieved through a remote evaluation mechanism. Since first-class continuations pose some semantic problems with respect to concurrency, we propose a neat semantics for them and then show how to build, in the language itself, advanced concurrent constructs such as futures. Finally, we comment on some simulations, with various topologies and migration policies, which allow us to assess our linguistic choices and confirm the viability of the model.
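As a side note on the "futures" the abstract mentions, here is a minimal sketch of the construct, in Haskell rather than Crystal Scheme and with none of the paper's continuation-based machinery: start a computation now, and block for its result only when it is demanded.

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)

-- Returns an action that, when run, waits for the already-started computation.
future :: IO a -> IO (IO a)
future act = do
  box <- newEmptyMVar
  _   <- forkIO (act >>= putMVar box)
  return (readMVar box)   -- forcing the future = reading the box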
Note the year of publication: 1991. The bibliography goes back to 1980. My question might be naive, but what are the major inventions in the area of hardware parallelism since then? What ideas behind, e.g., the Cell architecture were not known in 1991? Is Cell just an engineering achievement (assuming it is an achievement), or a scientific one as well?

And, returning to PLT, can Crystal Scheme be useful for Cell?