Functional
Freer Monads, More Extensible Effects, by Oleg Kiselyov and Hiromi Ishii:
We present a rational reconstruction of extensible effects, the recently proposed alternative to monad transformers, as the confluence of efforts to make effectful computations compose. Free monads and then extensible effects emerge from the straightforward term representation of an effectful computation, as more and more boilerplate is abstracted away. The generalization process further leads to freer monads, constructed without the Functor constraint.
The continuation exposed in freer monads can then be represented as an efficient type-aligned data structure. The end result is the algorithmically efficient extensible effects library, which is not only more comprehensible but also faster than earlier implementations. As an illustration of the new library, we show three surprisingly simple applications: nondeterminism with committed choice (LogicT), catching IO exceptions in the presence of other effects, and the semi-automatic management of file handles and other resources through monadic regions.
We extensively use and promote the new sort of ‘laziness’, which underlies the left Kan extension: instead of performing an operation, keep its operands and pretend it is done.
This looks very promising; the paper includes benchmarks comparing heavily optimized, special-cased monad transformers against this new formulation of extensible effects using freer monads.
See also the reddit discussion.
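The core construction fits in a few lines of Haskell. The sketch below follows the freer idea from the paper, though the effect and handler names (`Get`, `ask`, `runReader`) are my own illustrations, not the library's API: `Impure` simply stores the request alongside its continuation, so no `Functor` instance on the effect signature is ever required.

```haskell
{-# LANGUAGE GADTs #-}

-- The freer monad: a monad for *any* f :: * -> *, no Functor f needed.
-- 'Impure fx k' keeps the request fx and the continuation k unevaluated --
-- the "laziness" of the left Kan extension mentioned in the abstract.
data Freer f a where
  Pure   :: a -> Freer f a
  Impure :: f x -> (x -> Freer f a) -> Freer f a

instance Functor (Freer f) where
  fmap f (Pure a)      = Pure (f a)
  fmap f (Impure fx k) = Impure fx (fmap f . k)

instance Applicative (Freer f) where
  pure = Pure
  Pure f      <*> m = fmap f m
  Impure fx k <*> m = Impure fx ((<*> m) . k)

instance Monad (Freer f) where
  return = Pure
  Pure a      >>= k = k a
  Impure fx h >>= k = Impure fx (\x -> h x >>= k)

-- A sample effect signature: a reader request, with no Functor instance.
data Get s x where
  Get :: Get s s

ask :: Freer (Get s) s
ask = Impure Get Pure

-- A handler interprets requests away, feeding replies to the continuation.
runReader :: s -> Freer (Get s) a -> a
runReader _ (Pure a)       = a
runReader s (Impure Get k) = runReader s (k s)
```

A handler like `runReader` is just structural recursion over the request/continuation pairs, which is what makes these effects easy to extend and reinterpret.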
Reagents: Expressing and Composing Fine-grained Concurrency, by Aaron Turon:
Efficient communication and synchronization is crucial for fine-grained parallelism. Libraries providing such features, while indispensable, are difficult to write, and often cannot be tailored or composed to meet the needs of specific users. We introduce reagents, a set of combinators for concisely expressing concurrency algorithms. Reagents scale as well as their hand-coded counterparts, while providing the composability existing libraries lack.
This is a pretty neat approach to writing concurrent code, which lies somewhere between manually implementing low-level concurrent algorithms and STM. Concurrent algorithms are expressed and composed semi-naively, and reagents automate the retries for you when threads interfere (transient failure of a CAS update), or block waiting for input from another thread (permanent failure, where no input is available).
The core seems to be kCAS with synchronous communication between threads to coordinate reactions on shared state. The properties seem rather nice, as Aaron describes:
When used in isolation, reagents are guaranteed to perform only the CASes that the hand-written algorithm would, so they introduce no overhead on shared-memory operations; by recoding an algorithm to use reagents, you lose nothing. Yet unlike hand-written algorithms, reagents can be composed using choice, tailored with new blocking behavior, or combined into larger atomic blocks.
The benchmarks in section 6 look promising. This appears to be work towards Aaron's thesis which provides many more details.
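To get a feel for the retry-on-transient-failure protocol, here is a deliberately toy model in Haskell. This is my own sketch, not Turon's Scala library or its k-CAS machinery: a reagent is modelled as an attempt that may fail transiently, the runner spins on transient failure, and choice falls through to an alternative.

```haskell
import Data.IORef

-- Toy model: a reagent is an attempt that may fail transiently (Nothing).
newtype Reagent a b = Reagent { tryReact :: a -> IO (Maybe b) }

-- Left-biased choice: try the second reagent if the first fails.
rchoice :: Reagent a b -> Reagent a b -> Reagent a b
rchoice r s = Reagent $ \a -> do
  mb <- tryReact r a
  case mb of
    Nothing -> tryReact s a
    res     -> return res

-- The runner retries transient failures (a real implementation would
-- eventually block instead of spinning).
run :: Reagent a b -> a -> IO b
run r a = tryReact r a >>= maybe (run r a) return

-- Treiber-style stack operations over an IORef, as reagents.
push :: IORef [x] -> Reagent x ()
push ref = Reagent $ \x ->
  Just <$> atomicModifyIORef' ref (\xs -> (x : xs, ()))

pop :: IORef [x] -> Reagent () x
pop ref = Reagent $ \() -> atomicModifyIORef' ref f
  where
    f []       = ([], Nothing)   -- empty stack: transient failure, retry
    f (x : xs) = (xs, Just x)
```

In the real library the composed reagent commits via a single k-CAS, so a sequenced `push`/`pop` pair is atomic; this toy model only captures the failure-and-retry structure.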
Interesting survey.
Based on a brief look, I am not sure I agree with all the conclusions/rankings, but most seem to make sense, and the Notable Libraries and examples in each category are helpful.
Don Syme receives the Royal Academy of Engineering's Silver Medal for his work on F#. The citation reads:
F# is known for being a clear and more concise language that interoperates well with other systems, and is used in applications as diverse as analysing the UK energy market and tackling money laundering. It allows programmers to write code with fewer bugs than other languages, so users can get their programme delivered to market both rapidly and accurately. Used by major enterprises in the UK and worldwide, F# is both cross-platform and open source, and includes innovative features such as unit-of-measure inference, asynchronous programming and type providers, which have in turn influenced later editions of C# and other industry languages.
Congratulations!
Self-Representation in Girard’s System U, by Matt Brown and Jens Palsberg:
In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F “seems to be impossible”, but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?
We show that it is not and present a typed self-representation for Girard’s System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind-polymorphic terms and types. We show that kind-polymorphic types (i.e. types that depend on kinds) are sufficient to “tie the knot” – they enable representations of kind-polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.
Typed self-representation has come up here on LtU in the past. I believe the best self-interpreter available prior to this work was a variant of Barry Jay's SF-calculus, covered in the paper Typed Self-Interpretation by Pattern Matching (and more fully developed in Structural Types for the Factorisation Calculus). These covered statically typed self-interpreters without resorting to undecidable type:type rules.
However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem. Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.
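For contrast with the scale of the result, the familiar small-scale version of "a typed representation whose interpreter is guaranteed type-preserving" is the standard GADT interpreter, sketched below in Haskell. This is a toy object language, emphatically not a self-interpreter; what the paper achieves is making the represented language as expressive as the one doing the representing.

```haskell
{-# LANGUAGE GADTs #-}

-- A typed term representation: the index 'a' is the object-level type.
data Term a where
  Val :: a -> Term a                        -- embed a metalanguage value
  Add :: Term Int -> Term Int -> Term Int
  Lam :: (Term a -> Term b) -> Term (a -> b)  -- HOAS binding
  App :: Term (a -> b) -> Term a -> Term b

-- The interpreter recovers a value from its representation; its type
-- 'Term a -> a' is a type-preservation guarantee checked by GHC.
eval :: Term a -> a
eval (Val a)   = a
eval (Add x y) = eval x + eval y
eval (Lam f)   = \a -> eval (f (Val a))
eval (App f x) = eval f (eval x)
```

For example, `eval (App (Lam (\x -> Add x (Val 1))) (Val 41))` evaluates a represented application without any runtime type checks.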
The Next Stage of Staging, by Jun Inoue, Oleg Kiselyov, Yukiyoshi Kameyama:
This position paper argues for type-level metaprogramming, wherein types and type declarations are generated in addition to program terms. Term-level metaprogramming, which allows manipulating expressions only, has been extensively studied in the form of staging, which ensures static type safety with a clean semantics with hygiene (lexical scoping). However, the corresponding development is absent for type manipulation. We propose extensions to staging to cover ML-style module generation and show the possibilities they open up for type specialization and overhead-free parametrization of data types equipped with operations. We outline the challenges our proposed extensions pose for semantics and type safety, hence offering a starting point for a long-term program in the next stage of staging research. The key observation is that type declarations do not obey scoping rules as variables do, and that in metaprogramming, types are naturally prone to escaping the lexical environment in which they were declared. This sets next-stage staging apart from dependent types, whose benefits and implementation mechanisms overlap with our proposal, but which does not deal with type-declaration generation. Furthermore, it leads to an interesting connection between staging and the logic of definitions, adding to the study’s theoretical significance.
A position paper describing the next logical progression of staging to metaprogramming over types. Now with the true first-class modules of 1ML, perhaps there's a clearer way forward.
In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:
Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds’ theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields “free” theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy is, by Noether’s theorem, a consequence of a system’s invariance under time-shifting.
In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
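As a reminder of what parametricity buys on the programming side, here is the classic free theorem checked on one concrete inhabitant of the type. This is the standard warm-up example, not Atkey's Lagrangian system: for any polymorphic `f :: [a] -> [a]`, the type alone guarantees `map g . f = f . map g`.

```haskell
-- One arbitrary inhabitant of the polymorphic type [a] -> [a].
-- Parametricity says any such function can only rearrange, drop, or
-- duplicate elements -- it cannot inspect them -- hence it commutes with map.
rev3 :: [a] -> [a]
rev3 = take 3 . reverse

-- Spot-check the free theorem on one g and one input list.
commutes :: Bool
commutes = map (* 2) (rev3 xs) == rev3 (map (* 2) xs)
  where xs = [1 .. 10 :: Int]
```

Atkey's contribution is to show that the same relational invariance argument, applied to polymorphic Lagrangians, satisfies the hypotheses of Noether's theorem.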
In case this one went under the radar, at POPL'12, Martín Escardó gave a tutorial on seemingly impossible functional programs:
Programming language semantics is typically applied to prove compiler correctness and allow (manual or automatic) program verification. Certain kinds of semantics can also be applied to discover programs that one wouldn't have otherwise thought of. This is the case, in particular, for semantics that incorporate topological ingredients (limits, continuity, openness, compactness). For example, it turns out that some function types (X → Y) with X infinite (but compact) do have decidable equality, contradicting perhaps popular belief, but certainly not (higher-type) computability theory. More generally, one can often check infinitely many cases in finite time.

I will show you such programs, run them fast in surprising instances, and introduce the theory behind their derivation and working. In particular, I will study a single (very high type) program that (i) optimally plays sequential games of unbounded length, (ii) implements the Tychonoff Theorem from topology (and builds finite-time search functions for infinite sets), (iii) realizes the double-negation shift from proof theory (and allows us to extract programs from classical proofs that use the axiom of countable choice). There will be several examples in the languages Haskell and Agda.
A shorter version (coded in Haskell) appears in Andrej Bauer's blog.
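The heart of these seemingly impossible programs is an exhaustive searcher over the Cantor space of infinite bit sequences. The snippet below is a condensed variant of the Haskell code from the talk and Bauer's post; it decides properties quantified over infinitely many sequences, provided the predicate is computable (hence continuous, inspecting only finitely many bits).

```haskell
-- An infinite sequence of booleans, indexed by position.
type Cantor = Integer -> Bool

-- Prepend one bit to a sequence.
(#) :: Bool -> Cantor -> Cantor
(x # _) 0 = x
(_ # a) i = a (i - 1)

-- 'find p' returns a sequence satisfying p, if any exists:
-- try a True first bit; commit to it only if some extension works.
find :: (Cantor -> Bool) -> Cantor
find p
  | forsome (\a -> p (True # a)) = True  # find (\a -> p (True  # a))
  | otherwise                    = False # find (\a -> p (False # a))

-- Decidable quantification over an infinite (but compact) space.
forsome, forevery :: (Cantor -> Bool) -> Bool
forsome p  = p (find p)
forevery p = not (forsome (not . p))

-- Decidable equality of functions out of the Cantor space.
equal :: Eq y => (Cantor -> y) -> (Cantor -> y) -> Bool
equal f g = forevery (\a -> f a == g a)
```

For instance, `equal (\a -> a 1 && a 2) (\a -> a 2 && a 1)` terminates with `True`, even though it quantifies over uncountably many sequences.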
In his blog, Bob Harper, in joint effort with Dave MacQueen and Lars Bergstrom, announces the launch of smlfamily.org:
The Standard ML Family project provides a home for online versions of various formal definitions of Standard ML, including the "Definition of Standard ML, Revised" (Standard ML 97). The site also supports coordination between different implementations of the Standard ML (SML) programming language by maintaining common resources such as the documentation for the Standard ML Basis Library and standard test suites. The goal is to increase compatibility and resource sharing between Standard ML implementations.
The site includes a history section devoted to the history of ML, and of Standard ML in particular. This section will contain a collection of original source documents relating to the design of the language.
