Uniqueness Typing Simplified

Uniqueness Typing Simplified, by Edsko de Vries, Rinus Plasmeijer, and David M. Abrahamson.

We present a uniqueness type system that is simpler than both Clean’s uniqueness system and a system we proposed previously. The new type system is straightforward to implement and add to existing compilers, and can easily be extended with advanced features such as higher-rank types and impredicativity. We describe our implementation in Morrow, an experimental functional language with both these features. Finally, we prove soundness of the core type system with respect to the call-by-need lambda calculus.

Uniqueness typing is related to linear typing, and the differences between the two have been discussed here before; linear types have many applications of their own. This paper describes the difference between linear and unique types:

In linear logic, variables of a non-linear type can be coerced to a linear type (dereliction). Harrington phrases it well: in linear logic, "linear" means "will not be duplicated" whereas in uniqueness typing, "unique" means "has not been duplicated".

In contrast to other papers on substructural typing, such as Fluet's thesis Monadic and Substructural Type Systems for Region-Based Memory Management, this paper classifies uniqueness attributes by a kind system. This possibility was mentioned in Fluet's thesis as well, Section 4.2, footnote 8, though the technique used here seems somewhat different.

Uniqueness typing is generally used to tame side effects in purely functional languages. Uniqueness types have also been contrasted with monads on LtU, which are likewise used to tame effects, among other things. One point not discussed in that thread is how straightforward it is to compile each approach into efficient code. It seems clear that uniqueness types have a straightforward, efficient compilation, but it's not clear to me how efficient monads can be made, or how much work is required to make them efficient. This may make uniqueness or substructural types more suitable than monads for just-in-time compilers.
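
To make the compilation point concrete, here is a minimal Haskell sketch (the names UArr and uset are mine, not the paper's) of why a uniqueness guarantee licenses in-place update:

```haskell
-- A toy "unique array" API. Under uniqueness typing, 'uset' would know
-- statically that no other reference to its argument exists, so it
-- could overwrite the element destructively. Plain Haskell cannot
-- express that guarantee, so here the update is modelled by copying.
newtype UArr a = UArr [a]

uset :: Int -> a -> UArr a -> UArr a
uset i x (UArr xs) = UArr (take i xs ++ [x] ++ drop (i + 1) xs)

-- A uniqueness type system would reject the following, because 'arr'
-- is used again after being passed to 'uset':
--   let arr' = uset 0 'a' arr in (arr, arr')
```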

Data Types à la Carte

Data Types à la Carte. Wouter Swierstra.

This paper describes a technique for assembling both data types and functions from isolated individual components. We also explore how the same technology can be used to combine free monads and, as a result, structure Haskell’s monolithic IO monad.

This new Functional Pearl has been mentioned twice in comments (1, 2), and has now also appeared with comments on Phil Wadler's blog. Obviously it's time to put it on the front page.
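
The heart of the construction is small enough to show inline. The following condensed fragment follows the paper's development (the paper goes further, adding a subsumption class f :<: g so that the Inl/Inr injections need never be written by hand):

```haskell
{-# LANGUAGE TypeOperators, DeriveFunctor #-}

-- Expressions are fixed points of a signature functor, and signatures
-- are assembled from independent pieces with the coproduct (:+:).
newtype Expr f = In (f (Expr f))

data Val e = Val Int                     deriving Functor
data Add e = Add e e                     deriving Functor
data (f :+: g) e = Inl (f e) | Inr (g e) deriving Functor

-- Evaluation is a fold; each signature functor supplies its own algebra.
class Functor f => Eval f where
  evalAlg :: f Int -> Int

instance Eval Val where evalAlg (Val n)   = n
instance Eval Add where evalAlg (Add x y) = x + y
instance (Eval f, Eval g) => Eval (f :+: g) where
  evalAlg (Inl x) = evalAlg x
  evalAlg (Inr y) = evalAlg y

foldExpr :: Functor f => (f a -> a) -> Expr f -> a
foldExpr alg (In t) = alg (fmap (foldExpr alg) t)

eval :: Eval f => Expr f -> Int
eval = foldExpr evalAlg

-- 1 + 2 over the combined signature; 'eval example' yields 3, with
-- Val and Add defined entirely independently of one another.
example :: Expr (Val :+: Add)
example = In (Inr (Add (In (Inl (Val 1))) (In (Inl (Val 2)))))
```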

Foundations for Structured Programming with GADTs

Foundations for Structured Programming with GADTs, Patricia Johann and Neil Ghani. POPL 2008.

GADTs are at the cutting edge of functional programming and become more widely used every day. Nevertheless, the semantic foundations underlying GADTs are not well understood. In this paper we solve this problem by showing that the standard theory of datatypes as carriers of initial algebras of functors can be extended from algebraic and nested data types to GADTs. We then use this observation to derive an initial algebra semantics for GADTs, thus ensuring that all of the accumulated knowledge about initial algebras can be brought to bear on them. Next, we use our initial algebra semantics for GADTs to derive expressive and principled tools -- analogous to the well-known and widely-used ones for algebraic and nested data types -- for reasoning about, programming with, and improving the performance of programs involving GADTs; we christen such a collection of tools for a GADT an initial algebra package. Along the way, we give a constructive demonstration that every GADT can be reduced to one which uses only the equality GADT and existential quantification. Although other such reductions exist in the literature, ours is entirely local, is independent of any particular syntactic presentation of GADTs, and can be implemented in the host language, rather than existing solely as a metatheoretical artifact. The main technical ideas underlying our approach are (i) to modify the notion of a higher-order functor so that GADTs can be seen as carriers of initial algebras of higher-order functors, and (ii) to use left Kan extensions to trade arbitrary GADTs for simpler-but-equivalent ones for which initial algebra semantics can be derived.

I found this to be a really interesting paper, because the work had the opposite effect on my opinions from the one I expected. Usually, when I see a paper give a really clean semantics to an idea, I end up convinced that the idea is actually a good one. However, Johann and Ghani give such an elegant treatment of GADTs that it made me think that perhaps Haskell-style GADTs should not be added as-is to new programming languages!

This is because of the way they give semantics to GADTs as functors |C| -> C, where C is the usual semantic category (e.g., CPPO) and |C| is the discrete category formed from C by retaining the objects and only the identity morphisms. If I understand rightly, this illustrates that the indices in a GADT are only being used as index terms, and their structure as types goes unused. So it's really a kind of pun, arising from the accident that Haskell has * as its only base kind. This makes me think that future languages should include indices, but that these index domains should not coincide with the kind of types. That is, users should be able to define new kinds distinct from *, and use those as indices to types, and these are the datatypes that should get a semantics in the style of this paper.
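
As a sketch of what such index domains might look like, here is a length-indexed vector whose index is drawn from a user-defined kind distinct from *, written in GHC's DataKinds style (my example, not the paper's):

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- 'Nat' is promoted to a kind distinct from *, and the index of 'Vec'
-- is drawn from it. The index is used purely as an index; its
-- structure as a type never matters, matching the discrete-category
-- reading |C| -> C described above.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- The index rules out ill-formed calls at compile time: 'safeHead'
-- simply cannot be applied to an empty vector.
safeHead :: Vec ('S n) a -> a
safeHead (Cons x _) = x
```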

When Is A Functional Program Not A Functional Program?

When Is A Functional Program Not A Functional Program?, John Longley. ICFP 1999.

In an impure functional language, there are programs whose behavior is completely functional (in that they behave extensionally on inputs) but the functions they compute cannot be written in the purely functional fragment of the language. That is, the class of programs with functional behavior is more expressive than the usual class of pure functional programs. In this paper we introduce this extended class of "functional" programs by means of examples in Standard ML, and explore what they might have to offer to programmers and language implementors.

After reviewing some theoretical background, we present some examples of functions of the above kind, and discuss how they may be implemented. We then consider two possible programming applications for these functions: the implementation of a search algorithm, and an algorithm for exact real-number integration. We discuss the advantages and limitations of this style of programming relative to other approaches. We also consider the increased scope for compiler optimizations that these functions would offer.

I like this paper because it shows how some of the most abstract bits of formal logic (realizability models of higher order logic) suggest actual programs you can write.
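
To give a flavour of those programs: one of the paper's examples is a "catch"-style functional, written in SML using exceptions, which reports whether (and where) a functional first inspects its argument. A rough Haskell transcription (using unsafePerformIO, so deliberately outside the pure fragment, which is exactly the point) might look like this:

```haskell
import Control.Exception (Exception, catch, evaluate, throw)
import System.IO.Unsafe (unsafePerformIO)

-- A private exception used to probe where 'f' first looks at its
-- argument.
newtype Probe = Probe Int deriving Show
instance Exception Probe

-- Left n  : f inspects its argument, first at n.
-- Right v : f never inspects its argument and returns v.
catchQ :: ((Int -> Int) -> Int) -> Either Int Int
catchQ f = unsafePerformIO $
  (Right <$> evaluate (f (\n -> throw (Probe n))))
    `catch` (\(Probe n) -> return (Left n))

-- ghci> catchQ (\g -> g 7 + 1)
-- Left 7
-- ghci> catchQ (\_ -> 42)
-- Right 42
```

The implementation is impure, but once partial arguments are taken into account the observable behaviour is extensional, and (as the paper argues for its SML analogues) it is not definable in the pure fragment of the language.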

As a barely-related aside, even a brief look into this realizability stuff also taught me a good deal of humility -- for example, it seems that at higher types, what you can represent mathematically depends on the specific notion of computation you're using. So the mathematical consequences of the Church-Turing thesis are rather more subtle than we might initially expect. (Andrej Bauer discusses a related point in more detail in this blog post.)

OCaml Summer Project

I am pleased to announce the second OCaml Summer Project! The OSP is aimed at encouraging growth in the OCaml community by funding students over the summer to work on open-source projects in OCaml. We'll fund up to three months of work, and at the end of the summer, we will fly the participants out for a meeting in New York, where people will present their projects and get a chance to meet with other members of the OCaml community.

The project is being funded and run by Jane Street Capital. Jane Street makes extensive use of OCaml, and we are excited about the idea of encouraging and growing the OCaml community.

Our goal this year is to get both faculty and students involved. To that end, we will require joint applications from the student or students who will be working on the project, and from a faculty member who both recommends the students and will mentor them throughout the project. Each student will receive a grant of $5k/month over the course of the project, and each faculty member will receive $2k/month. We expect students to treat this as a full-time commitment, and for professors to spend the equivalent of one day a week on the project.

We will also award a prize for what we deem to be the most successful project. Special consideration will be given to projects that display real polish in the form of good documentation, robust build systems, and effective test suites. We'll announce more details about the prize further down the line.

If you'd like to learn more about the OSP and how to apply, you can look at our website here:

http://ocamlsummerproject.com

Please direct any questions or suggestions you have to osp@janestcapital.com. Also, this might be a nice place for people to leave feedback about the program.

(if one of the editors thought this was appropriate to move to the front page, I would be appreciative. I think it's something that would be of interest to a large part of LtU's readership.)

The YNot Project

The YNot Project

The goal of the Ynot project is to make programming with dependent types practical for a modern programming language. In particular, we are extending the Coq proof assistant to make it possible to write higher-order, imperative and concurrent programs (in the style of Haskell) through a shallow embedding of Hoare Type Theory (HTT). HTT provides a clean separation between pure and effectful computations, makes it possible to formally specify and reason about effects, and is fully compositional.
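
For those unfamiliar with HTT, the following is a loose Haskell caricature (nothing like Ynot's actual Coq definitions, where the indices are genuine propositions checked by the proof assistant) of the shape of a Hoare type, just to convey what compositional specifications mean here:

```haskell
-- A computation indexed by phantom pre- and postconditions on the
-- heap. In HTT these indices are real assertions; here they only show
-- how sequencing composes specifications along with the code.
type Heap = [(Int, Int)]  -- toy heap: locations mapped to values

newtype HST pre post a = HST { runHST :: Heap -> (a, Heap) }

-- The postcondition of the first computation must match the
-- precondition of the second: specifications compose with the code.
bindH :: HST p q a -> (a -> HST q r b) -> HST p r b
bindH (HST f) k = HST $ \h ->
  let (x, h') = f h
  in runHST (k x) h'

returnH :: a -> HST p p a
returnH x = HST $ \h -> (x, h)
```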

This was actually commented on here, by Greg Morrisett himself. Conceptually, this seems like it's related to Adam Chlipala's A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language. See, in particular, slides 23-24 of this presentation (PDF). More generally, computation and reflection seem to be gaining recognition as important features for the practical use of such powerful tools as Coq; see also SSREFLECT tactics for Coq and their accompanying paper A Small Scale Reflection Extension for the Coq system (PDF).

It's also interesting to observe that the work appears to depend on the "Program" keyword in the forthcoming Coq 8.2, the work behind which is known as "Russell," as referred to in this thread. In the meantime, Russell's main page is here. Again, the point is to simplify programming with dependent types in Coq.

Update: Some preliminary code is available from the project page.

Recycling Continuations

Recycling Continuations, Jonathan Sobel and Daniel P. Friedman. ICFP 1998.

If the continuations in functional data-structure-generating programs are made explicit and represented as records, they can be "recycled." Once they have served their purpose as temporary, intermediate structures for managing program control, the space they occupy can be reused for the structures that the programs produce as their output. To effect this immediate memory reclamation, we use a sequence of correctness-preserving program transformations, demonstrated through a series of simple examples. We then apply the transformations to general anamorphism operators, with the important consequence that all finite-output anamorphisms can now be run without any stack- or continuation-space overhead.

This is a fun paper, and exactly the kind of thing I find addictive: it uses some elegant theory to formalize and systematize a clever hackerly trick.
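
As a taste of the trick, here is the first step of the transformation transcribed to Haskell (the paper itself works in Scheme): the continuation of a list-producing function is made explicit as a data structure whose frames have exactly the shape of the output's cons cells.

```haskell
-- Defunctionalized continuation for a list-producing map. Each 'Frame'
-- holds one produced element plus the rest of the continuation, i.e.
-- it has the same shape as a cons cell, which is what lets the paper's
-- later transformations reuse its storage in place for the output.
data Kont a = Done | Frame a (Kont a)

mapK :: (a -> b) -> [a] -> Kont b -> [b]
mapK _ []       k = unwind k []
mapK f (x : xs) k = mapK f xs (Frame (f x) k)

-- Unwinding the continuation builds the result; in the recycled
-- version, each Frame would be overwritten to become the cons cell.
unwind :: Kont b -> [b] -> [b]
unwind Done        acc = acc
unwind (Frame y k) acc = unwind k (y : acc)

-- 'mapK f xs Done' computes 'map f xs'.
```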

The worker/wrapper transformation

Andy Gill and Graham Hutton. The worker/wrapper transformation.

The worker/wrapper transformation is a technique for changing the type of a computation, usually with the aim of improving its performance. It has been used by compiler writers for many years, but the technique is little-known in the wider functional programming community, and has never been formalised. In this article we explain, formalise, and explore the generality of the worker/wrapper transformation. We also provide a systematic recipe for its use, and illustrate the power of this recipe using a range of examples.

While the basic idea behind the worker/wrapper transformation will seem almost trivial to any programmer, this paper shows the added value that comes from formal analysis, and the usefulness of the algebraic approach.
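
For readers who haven't met it by name, here is the recipe in miniature, using the classic accumulating reverse (a variant of the examples in the paper):

```haskell
-- Original definition: quadratic, because (++) retraverses its left
-- argument on every step.
revSlow :: [a] -> [a]
revSlow []       = []
revSlow (x : xs) = revSlow xs ++ [x]

-- Worker: the same computation at a better type, [a] -> [a] -> [a]
-- (an accumulator), now linear time.
revWork :: [a] -> [a] -> [a]
revWork []       acc = acc
revWork (x : xs) acc = revWork xs (x : acc)

-- Wrapper: a thin shim mediating between the old type and the new one.
rev :: [a] -> [a]
rev xs = revWork xs []
```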

The paper may be slightly too long for those used to reading work of this type, but since all the examples are presented in detail it should be more accessible to newcomers than many other similar papers.

Theorem proving support in programming language semantics

We describe several views of the semantics of a simple programming language as formal documents in the calculus of inductive constructions that can be verified by the Coq proof system. Covered aspects are natural semantics, denotational semantics, axiomatic semantics, and abstract interpretation. Descriptions as recursive functions are also provided whenever suitable, thus yielding a verification condition generator and a static analyser that can be run inside the theorem prover for use in reflective proofs. Extraction of an interpreter from the denotational semantics is also described. All different aspects are formally proved sound with respect to the natural semantics specification.
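
As a rough illustration of the "interpreter from the denotational semantics" idea in the abstract, here is a Haskell sketch (not the paper's Coq development, where the While case needs a termination argument, e.g. a fuel parameter, that general recursion here glosses over):

```haskell
-- Each syntactic construct is mapped to its denotation: expressions
-- denote functions from states to values, statements denote state
-- transformers. Reading the equations left to right gives an
-- interpreter directly.
data Expr = Lit Int | Var String | Add Expr Expr
data Stmt = Skip | Assign String Expr | Seq Stmt Stmt | While Expr Stmt

type State = String -> Int

evalE :: Expr -> State -> Int
evalE (Lit n)   _ = n
evalE (Var x)   s = s x
evalE (Add a b) s = evalE a s + evalE b s

exec :: Stmt -> State -> State
exec Skip         s = s
exec (Assign x e) s = \y -> if y == x then evalE e s else s y
exec (Seq c1 c2)  s = exec c2 (exec c1 s)
exec (While e c)  s
  | evalE e s /= 0 = exec (While e c) (exec c s)
  | otherwise      = s
```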

More work on mechanized metatheory with an eye towards naturalness of representation and automation. This seems to me to relate to Adam Chlipala's work on A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language, which relies on denotational semantics and proof by reflection, in crucial ways. More generally, it seems to reinforce an important trend in using type-theory-based theorem provers to tackle programming language design from the semantic point of view (see also A Very Modal Model of a Modern, Major, General Type System and Verifying Semantic Type Soundness of a Simple Compiler). I find the trend exciting, but of course I also wonder how far we can practically go with it today, given that the overwhelming majority of the literature, including our beloved Types and Programming Languages, is based on A Syntactic Approach to Type Soundness. Even the upcoming How to write your next POPL paper in Coq at POPL '08 centers on the syntactic approach.

Is the syntactic approach barking up the wrong tree, in the long term? The semantic approach? Both? Neither?

CUFP write-up

A write-up of the Commercial Users of Functional Programming meeting held this October is available, for those of us who didn't attend. The write-up is well written and thought-provoking (it was written by Jeremy Gibbons, so that's not a surprise).

The goal of the Commercial Users of Functional Programming series of workshops is to build a community for users of functional programming languages and technology. This, the fourth workshop in the series, drew 104 registered participants (and apparently more than a handful of casual observers).

It is often observed that functional programming is most widely used for language-related projects (DSLs, static analysis tools, and the like). Part of the reason is surely cultural: people working on projects of this type are more familiar with functional programming than people working in other domains. But it seems that it may be worthwhile to discuss the other reasons that make functional languages suitable for this type of work. There are plenty of reasons, many of them previously discussed here (e.g., Scheme's uniform syntax, hygiene, DSELs), but perhaps the issue is worth revisiting, seeing as this remains the killer application for functional programming, even taking into account the other types of project described in the workshop.