Functional

The Next Stage of Staging

The Next Stage of Staging, by Jun Inoue, Oleg Kiselyov, Yukiyoshi Kameyama:

This position paper argues for type-level metaprogramming, wherein types and type declarations are generated in addition to program terms. Term-level metaprogramming, which allows manipulating expressions only, has been extensively studied in the form of staging, which ensures static type safety with a clean semantics with hygiene (lexical scoping). However, the corresponding development is absent for type manipulation. We propose extensions to staging to cover ML-style module generation and show the possibilities they open up for type specialization and overhead-free parametrization of data types equipped with operations. We outline the challenges our proposed extensions pose for semantics and type safety, hence offering a starting point for a long-term program in the next stage of staging research. The key observation is that type declarations do not obey scoping rules as variables do, and that in metaprogramming, types are naturally prone to escaping the lexical environment in which they were declared. This sets next-stage staging apart from dependent types, whose benefits and implementation mechanisms overlap with our proposal, but which does not deal with type-declaration generation. Furthermore, it leads to an interesting connection between staging and the logic of definitions, adding to the study’s theoretical significance.

A position paper describing the next logical progression of staging to metaprogramming over types. Now with the true first-class modules of 1ML, perhaps there's a clearer way forward.
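
For readers who have not met term-level staging, here is a minimal sketch of the classic staged power function, written with Haskell's typed Template Haskell standing in for MetaOCaml-style brackets and escapes. It assumes a recent GHC in which typed quotations have type Code Q a; the function names are mine, not the paper's.

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH.Syntax (Code, Q)

    -- Term-level staging: generate code for x^n with the recursion unrolled
    -- at code-generation time.  Quotations [|| ... ||] build code values and
    -- splices $$(...) insert them, much as MetaOCaml's brackets and escapes do.
    power :: Int -> Code Q Int -> Code Q Int
    power 0 _ = [|| 1 ||]
    power n x = [|| $$x * $$(power (n - 1) x) ||]

    -- In another module:  cube y = $$(power 3 [|| y ||])
    -- elaborates to       cube y = y * (y * (y * 1))

The paper's observation is that nothing comparable exists for generating type declarations or module signatures: quotations like the ones above can only ever produce terms.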

Conservation laws for free!

In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:

Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds’ theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields “free” theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy is, by Noether’s theorem, a consequence of a system’s invariance under time-shifting.

In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
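
To make the "free theorems from types" starting point concrete, here is the textbook Haskell example; the paper's contribution is carrying the same parametricity reasoning over to Lagrangians, and nothing below is taken from it.

    -- Any f of this type can only rearrange, drop, or duplicate list elements;
    -- it cannot inspect them.  Parametricity therefore gives, for free:
    --     map g . f  ==  f . map g      for every g
    f :: [a] -> [a]
    f = reverse . take 3

    -- A spot check of the free theorem for one g and one input (evaluates to True):
    check :: Bool
    check = map (* 2) (f [1 .. 10 :: Int]) == f (map (* 2) [1 .. 10])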

Seemingly impossible programs

In case this one went under the radar, at POPL'12, Martín Escardó gave a tutorial on seemingly impossible functional programs:

Programming language semantics is typically applied to prove compiler correctness and allow (manual or automatic) program verification. Certain kinds of semantics can also be applied to discover programs that one wouldn't have otherwise thought of. This is the case, in particular, for semantics that incorporate topological ingredients (limits, continuity, openness, compactness). For example, it turns out that some function types (X -> Y) with X infinite (but compact) do have decidable equality, contradicting perhaps popular belief, but certainly not (higher-type) computability theory. More generally, one can often check infinitely many cases in finite time.

I will show you such programs, run them fast in surprising instances, and introduce the theory behind their derivation and working. In particular, I will study a single (very high type) program that (i) optimally plays sequential games of unbounded length, (ii) implements the Tychonoff Theorem from topology (and builds finite-time search functions for infinite sets), (iii) realizes the double-negation shift from proof theory (and allows us to extract programs from classical proofs that use the axiom of countable choice). There will be several examples in the languages Haskell and Agda.

A shorter version (coded in Haskell) appears in Andrej Bauer's blog.
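
For the impatient, here is a condensed Haskell rendering of the search operator at the heart of those posts, following Escardó's and Bauer's write-ups (the use of Bool and Integer, and the variable names, are my own). Given any total predicate on infinite bit sequences, forsome decides it in finite time, and equality of functions out of that space becomes decidable:

    type Cantor = Integer -> Bool        -- infinite sequences of bits

    -- Prepend one bit to a sequence.
    (#) :: Bool -> Cantor -> Cantor
    x # a = \i -> if i == 0 then x else a (i - 1)

    -- Find a sequence satisfying p if one exists (anything otherwise).
    -- Terminates for every total p, by a continuity/compactness argument.
    find :: (Cantor -> Bool) -> Cantor
    find p = if p (False # l) then False # l else True # r
      where l = find (\a -> p (False # a))
            r = find (\a -> p (True  # a))

    forsome, forevery :: (Cantor -> Bool) -> Bool
    forsome p  = p (find p)
    forevery p = not (forsome (not . p))

    -- The "seemingly impossible" part: decidable equality at type Cantor -> y.
    equal :: Eq y => (Cantor -> y) -> (Cantor -> y) -> Bool
    equal f g = forevery (\a -> f a == g a)

For instance, equal (\a -> a 3 || a 4) (\a -> a 4 || a 3) terminates and returns True, even though it quantifies over uncountably many sequences.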

sml-family.org

In his blog, Bob Harper, in a joint effort with Dave MacQueen and Lars Bergstrom, announces the launch of sml-family.org:

The Standard ML Family project provides a home for online versions of various formal definitions of Standard ML, including the "Definition of Standard ML, Revised" (Standard ML 97). The site also supports coordination between different implementations of the Standard ML (SML) programming language by maintaining common resources such as the documentation for the Standard ML Basis Library and standard test suites. The goal is to increase compatibility and resource sharing between Standard ML implementations.

The site includes a history section devoted to the history of ML, and of Standard ML in particular. This section will contain a collection of original source documents relating to the design of the language.

Inferring algebraic effects

Logical methods in computer science just published Matija Pretnar's latest take on algebraic effects and handlers:

We present a complete polymorphic effect inference algorithm for an ML-style language with handlers of not only exceptions, but of any other algebraic effect such as input & output, mutable references and many others. Our main aim is to offer the programmer a useful insight into the effectful behaviour of programs. Handlers help here by cutting down possible effects and the resulting lengthy output that often plagues precise effect systems. Additionally, we present a set of methods that further simplify the displayed types, some even by deliberately hiding inferred information from the programmer.

Pretnar and Bauer's Eff has made previous appearances here on LtU. Apart from the newfangled polymorphic effect system, this paper also contains an Eff tutorial.
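
For readers new to the topic, here is a toy Haskell rendering of the core idea; Eff's actual syntax and effect system look quite different, and all names below are invented. An effectful computation is a tree of operation calls, and a handler interprets those calls, after which the effect no longer appears, which is exactly what lets an effect system report a smaller effect set.

    -- A computation that may perform one algebraic effect, "choose",
    -- represented as a syntax tree of operation calls.
    data Prog a = Return a | Choose (Prog a) (Prog a)

    -- A small effectful program: two nondeterministic coin flips.
    coins :: Prog (Bool, Bool)
    coins =
      Choose (Choose (Return (True,  True)) (Return (True,  False)))
             (Choose (Return (False, True)) (Return (False, False)))

    -- Handlers give the operation its meaning; once handled, the effect is gone.
    handleAll :: Prog a -> [a]           -- collect every possible outcome
    handleAll (Return a)   = [a]
    handleAll (Choose l r) = handleAll l ++ handleAll r

    handleLeft :: Prog a -> a            -- always take the first choice
    handleLeft (Return a)   = a
    handleLeft (Choose l _) = handleLeft l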

Breaking the Complexity Barrier of Pure Functional Programs with Impure Data Structures

Breaking the Complexity Barrier of Pure Functional Programs with Impure Data Structures by Pieter Wuille and Tom Schrijvers:

Pure functional programming languages offer many advantages over impure languages. Unfortunately, the absence of destructive update imposes a complexity barrier. In imperative languages, there are algorithms and data structures with better complexity. We present our project for combining existing program transformation techniques to transform inefficient pure data structures into impure ones with better complexity. As a consequence, the programmer is not exposed to the impurity and retains the advantages of purity.

This paper is along the same lines as a question I asked a couple of years ago. The idea here is to allow programming using immutable interfaces, and then automatically transform it into a more efficient mutable equivalent.
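
The flavour of the transformation, done here by hand rather than by the authors' tool, looks roughly like this in Haskell: callers see a pure function over immutable arrays, while the implementation thaws the array once and updates it destructively inside runSTArray. All names are mine, not the paper's.

    import Data.Array (Array)
    import Data.Array.ST (runSTArray, thaw, writeArray)

    -- Pure interface: apply a batch of index updates to an immutable array.
    -- Applying them one at a time with (//) would copy the array for every
    -- update (O(n) each); here we pay one O(n) copy (thaw) plus O(1)
    -- destructive writes, and no mutation is visible to the caller.
    bulkUpdate :: [(Int, a)] -> Array Int a -> Array Int a
    bulkUpdate updates arr = runSTArray (do
      marr <- thaw arr
      mapM_ (\(i, x) -> writeArray marr i x) updates
      return marr)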

Apple Introduces Swift

Apple today announced Swift, a new programming language for the next versions of Mac OS X and iOS.

The Language Guide has more details about the potpourri of language features.

Functional Geometry and the Traité de Lutherie

Functional Geometry and the Traité de Lutherie by Harry Mairson, Brandeis University.

We describe a functional programming approach to the design of outlines of eighteenth-century string instruments. The approach is based on the research described in François Denis’s book, Traité de lutherie. The programming vernacular for Denis’s instructions, which we call functional geometry, is meant to reiterate the historically justified language and techniques of this musical instrument design. The programming metaphor is entirely Euclidean, involving straightedge and compass constructions, with few (if any) numbers, and no Cartesian equations or grid. As such, it is also an interesting approach to teaching programming and mathematics without numerical calculation or equational reasoning.

The advantage of this language-based, functional approach to lutherie is founded in the abstract characterization of common patterns in instrument design. These patterns include not only the abstraction of common straightedge and compass constructions, but of higher-order conceptualization of the instrument design process. We also discuss the role of arithmetic, geometric, harmonic, and subharmonic proportions, and the use of their rational approximants.
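
One way to imagine that "entirely Euclidean" vocabulary in Haskell terms is as a small embedded language of constructions, with ordinary functions capturing recurring patterns. This is my own sketch, not Mairson's code, and every constructor name is invented:

    -- Points exist only as results of constructions; there are no coordinates.
    data Point
      = Given String                  -- a named starting point
      | MeetLL Line Line              -- straightedge: where two lines cross
      | MeetCC Bool Circle Circle     -- compass: one of the two circle crossings
      deriving Show

    data Line   = Through Point Point  deriving Show   -- straightedge
    data Circle = Centred Point Point  deriving Show   -- compass: centre, radius point

    -- A recurring pattern captured once: the perpendicular bisector of two
    -- points via the classic two-circle construction.
    perpBisector :: Point -> Point -> Line
    perpBisector p q = Through (MeetCC True c d) (MeetCC False c d)
      where c = Centred p q
            d = Centred q p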

The Size-Change Termination Principle for Constructor Based Languages

The Size-Change Termination Principle for Constructor Based Languages, by Pierre Hyvernat:

This paper describes an automatic termination checker for a generic first-order call-by-value language in ML style. We use the fact that values are built from variants and tuples to keep some information about how arguments of recursive calls evolve during evaluation. The result is a criterion for termination extending the size-change termination principle of Lee, Jones and Ben-Amram that can detect size changes inside subvalues of arguments. Moreover, the corresponding algorithm is easy to implement, making it a good candidate for experimentation.

Looks like a relatively straightforward and complete description of a termination checker based on a notion of 'sized types' limited to first-order programs. LtU has covered this topic before, although this new paper doesn't seem to reference that particular Abel work.
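
To see what size-change termination buys over a naive "some argument always shrinks" check, consider the usual Haskell examples below; the second hints at the "size changes inside subvalues" refinement the abstract mentions (my reading of it, not an example taken from the paper).

    -- In ack, no single argument decreases at every recursive call, but each
    -- call either decreases m, or keeps m while decreasing n.  Composing those
    -- per-call "size-change graphs" rules out any infinite call sequence.
    ack :: Integer -> Integer -> Integer
    ack 0 n = n + 1
    ack m 0 = ack (m - 1) 1
    ack m n = ack (m - 1) (ack m (n - 1))

    -- Here the argument as a whole never shrinks, but its second component
    -- does; tracking sizes inside subvalues of arguments covers this case.
    sumWithAcc :: (Integer, [Integer]) -> Integer
    sumWithAcc (acc, [])     = acc
    sumWithAcc (acc, x : xs) = sumWithAcc (acc + x, xs)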

Dependently-Typed Metaprogramming (in Agda)

Conor McBride gave an 8-lecture summer course on Dependently typed metaprogramming (in Agda) at the Cambridge University Computer Laboratory:

Dependently typed functional programming languages such as Agda are capable of expressing very precise types for data. When those data themselves encode types, we gain a powerful mechanism for abstracting generic operations over carefully circumscribed universes. This course will begin with a rapid dependently typed programming primer in Agda, then explore techniques for and consequences of universe constructions. Of central importance are the “pattern functors” which determine the node structure of inductive and coinductive datatypes. We shall consider syntactic presentations of these functors (allowing operations as useful as symbolic differentiation), and relate them to the more uniform abstract notion of “container”. We shall expose the double life containers lead as “interaction structures” describing systems of effects. Later, we step up to functors over universes, acquiring the power of inductive-recursive definitions, and we use that power to build universes of dependent types.

The lecture notes, code, and video captures are available online.

As with his previous course, the notes contain many(!) mind-expanding exploratory exercises, some of which are quite challenging.
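
A pale Haskell shadow of the universe idea from the early lectures, for readers who have not written Agda: a GADT of codes for types, where each code carries its meaning, lets one generic operation be written once for a whole closed universe. This is my own illustration, not code from the course notes.

    {-# LANGUAGE GADTs #-}

    -- Codes for a small closed universe of types.
    data U a where
      UBool :: U Bool
      UInt  :: U Int
      UPair :: U a -> U b -> U (a, b)
      UList :: U a -> U [a]

    -- One generic operation, defined once for every type in the universe,
    -- e.g.  eqU (UList UInt) [1, 2] [1, 2]  ==  True.
    eqU :: U a -> a -> a -> Bool
    eqU UBool       x      y      = x == y
    eqU UInt        x      y      = x == y
    eqU (UPair u v) (a, b) (c, d) = eqU u a c && eqU v b d
    eqU (UList u)   xs     ys     = length xs == length ys
                                      && and (zipWith (eqU u) xs ys)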
