Resource Polymorphism

Resource Polymorphism, by Guillaume Munch-Maccagnoni:

We present a resource-management model for ML-style programming languages, designed to be compatible with the OCaml philosophy and runtime model. This is a proposal to extend the OCaml language with destructors, move semantics, and resource polymorphism, to improve its safety, efficiency, interoperability, and expressiveness. It builds on the ownership-and-borrowing models of systems programming languages (Cyclone, C++11, Rust) and on linear types in functional programming (Linear Lisp, Clean, Alms). It continues a synthesis of resources from systems programming and resources in linear logic initiated by Baker.

It is a combination of many known and some new ideas. On the novel side, it highlights the good mathematical structure of Stroustrup's "Resource acquisition is initialisation" (RAII) idiom for resource management based on destructors, a notion sometimes confused with finalizers, and builds on it a notion of resource polymorphism, inspired by polarisation in proof theory, that mixes C++'s RAII and a tracing garbage collector (GC).
The proposal targets a new spot in the design space, with an automatic and predictable resource-management model, at the same time based on lightweight and expressive language abstractions. It is backwards-compatible: current code is expected to run with the same performance, the new abstractions fully combine with the current ones, and it supports a resource-polymorphic extension of libraries. It does so with only a few additions to the runtime, and it integrates with the current GC implementation. It is also compatible with the upcoming multicore extension, and suggests that the Rust model for eliminating data-races applies.

Interesting questions arise for a safe and practical type system, many of which have already been thoroughly investigated in the languages and prototypes Cyclone, Rust, and Alms.

An ambitious goal, and it would be an incredibly useful addition to OCaml. It might even displace Rust in some niches: in principle you could avoid triggering the GC entirely, while still having an excellent GC available when you need one. The lack of such a fallback is a definite pain point in Rust.
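To make the destructor-versus-finalizer distinction concrete, here is a minimal sketch in current OCaml. Nothing below is part of the proposed extension; with_resource is a helper defined here for illustration. A finalizer runs whenever (and only if) the GC collects the value, whereas destructor-style cleanup is tied deterministically to scope exit:

    (* Finalizer: cleanup runs at some unspecified point, if ever,
       when the GC collects the channel. *)
    let finalized_channel path =
      let ic = open_in path in
      Gc.finalise (fun ic -> close_in ic) ic;
      ic

    (* Destructor-style (RAII-like) cleanup: release happens exactly
       when the scope is exited, normally or by exception. Today this
       must be encoded with a higher-order function; the proposal,
       roughly, attaches it to the type instead, so the compiler
       inserts the release automatically. *)
    let with_resource ~acquire ~release f =
      let r = acquire () in
      match f r with
      | v -> release r; v
      | exception e -> release r; raise e

    let first_line path =
      with_resource ~acquire:(fun () -> open_in path) ~release:close_in
        input_line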

How efficient is partial sharing?

Partial sharing graphs offer a reduction model for the lambda calculus that is optimal in the sense put forward by Jean-Jacques Lévy. Interest in this model has waxed and waned: initially it was thought to offer the most efficient possible technology for implementing the lambda calculus, but then an important result showed that the bookkeeping overhead of any such model can be very high; the cost of simulating n parallel beta steps cannot be bounded by any fixed tower of exponentials in n (Asperti & Mairson 1998). This result had a chilling effect on the initial wave of excitement over the technology.

Now Stefano Guerrini, one of the early investigators of partial sharing graphs, has an article with Marco Solieri (Guerrini & Solieri 2017) arguing that the gains from optimality can be very high, and that partial sharing graphs come within a quadratic factor of the best possible efficiency on a conservative analysis, which by the coarse standards of elementary recursion is close indeed. Will this argument and result lead to renewed interest in partial sharing graphs from implementors of functional languages? We'll see...

(Asperti & Mairson 1998) Andrea Asperti and Harry G. Mairson. Parallel beta reduction is not elementary recursive. POPL 1998.

(Guerrini & Solieri 2017) Stefano Guerrini and Marco Solieri. Is the Optimal Implementation inefficient? Elementarily not. 2017.

Is Haskell the right language for teaching functional programming principles?

No! (As Simon Thompson explains.)
You cannot not love the "exploration of the length function" at the bottom. Made me smile in the middle of running errands.

A unified approach to solving seven programming problems

A fun pearl by William E. Byrd, Michael Ballantyne, Gregory Rosenblatt, and Matthew Might from ICFP: seven programming challenges solved (easily!) using a relational interpreter. One challenge, for example, is to find quines. Another is to find programs that produce different results with lexical vs. dynamic scope.

The interpreter is implemented in miniKanren (of course), inside Racket (of course).

ICFP 2017 live streaming

If you are one of the few LtU-ers not presenting (just kidding...), you can get your functional programming fix by following the live stream from ICFP, starting tomorrow at 11:00 (UK). Want to know when to tune in? Check out the incredible program!

Proceedings of the ACM on Programming Languages

Proceedings of the ACM on Programming Languages (PACMPL) is a Gold Open Access journal publishing research on all aspects of programming languages, from design to implementation and from mathematical formalisms to empirical studies. Each issue of the journal is devoted to a particular subject area within programming languages and will be announced through publicized Calls for Papers.

See the ToC of the September 2017 ICFP issue here. Some very cool stuff.


Review of Graham Hutton's Programming in Haskell, 2e

A concise review by Simon Thompson of the second edition of Graham Hutton's Programming in Haskell. The first edition was published in 2007, though its chapters were written earlier, and the review focuses on how the language has changed since then, with the new edition embracing the "categorical / algebraic approach" more fully.

RustBelt: Securing the Foundations of the Rust Programming Language

RustBelt: Securing the Foundations of the Rust Programming Language by Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, Derek Dreyer:

Rust is a new systems programming language that promises to overcome the seemingly fundamental tradeoff between high-level safety guarantees and low-level control over resource management. Unfortunately, none of Rust’s safety claims have been formally proven, and there is good reason to question whether they actually hold. Specifically, Rust employs a strong, ownership-based type system, but then extends the expressive power of this core type system through libraries that internally use unsafe features. In this paper, we give the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust. Our proof is extensible in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe extension to the language. We have carried out this verification for some of the most important libraries that are used throughout the Rust ecosystem.

Rust is definitely pushing the envelope in a new direction, but there's always a little wariness around using libraries that make use of unsafe features, since "safety with performance" is a main reason people want to use Rust. So this is a great step in the right direction!
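The pattern in question, a safe interface whose implementation relies on unchecked operations justified by a local invariant, is easy to illustrate. Here is a rough OCaml analogue (my own sketch, not from the paper): the module signature is the safety boundary, and the bounds check is the informal "verification condition" that RustBelt would make formal, library by library.

    module Safe_buf : sig
      type t
      val create : int -> t
      val get : t -> int -> char   (* raises Invalid_argument on a bad index *)
    end = struct
      type t = Bytes.t
      let create n = Bytes.make n '\000'
      let get b i =
        (* This check is the invariant that justifies the unchecked
           read below; clients of the signature never see the hole. *)
        if i < 0 || i >= Bytes.length b then invalid_arg "Safe_buf.get";
        Bytes.unsafe_get b i
    end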

YOW! Lambda Jam 2017: John Hughes - Why Functional Programming Matters

Why FP still matters (video)...

27 years ago I published “Why Functional Programming Matters”, a manifesto for FP, but the subject is much older than that! In this talk I’ll take a deep dive into its history, highlighting some of the classic papers of the subject, personal favourites, and some of my own work. At the end of the day, four themes emerge that characterize what I love about the subject.
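If you have never read the paper, its central example shows how higher-order functions act as glue: once recursion over lists is captured by a fold, whole definitions are assembled from reusable parts. Transcribed from the paper's Miranda into OCaml:

    let rec foldr f z = function
      | [] -> z
      | x :: xs -> f x (foldr f z xs)

    let sum     xs = foldr ( + ) 0 xs
    let product xs = foldr ( * ) 1 xs
    let length  xs = foldr (fun _ n -> n + 1) 0 xs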

Type Systems as Macros

Type Systems as Macros, by Stephen Chang, Alex Knauth, Ben Greenman:

We present TURNSTILE, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. TURNSTILE critically depends on the idea of linguistic reuse. It exploits a macro system in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, TURNSTILE produces performant typed embedded languages with little effort.

This looks pretty awesome, considering it's not limited to simply typed languages but extends all the way to System F and F-omega! Even better, new type systems can be defined by reusing previous ones, reducing the effort needed to implement more expressive type systems. All code and further details are available here, and here's a blog post where Ben Greenman further discusses the related "type tailoring"; both are directly related to Active Libraries.
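The core idea, stripped of Racket's macro machinery, is that type checking and elaboration happen in one traversal: each checking rule also emits the rewritten term. A minimal sketch of that idea in OCaml (all names invented here, and this is not TURNSTILE's API):

    type ty = Int | Arrow of ty * ty

    type surface =                        (* the surface language *)
      | SVar of string
      | SLam of string * ty * surface
      | SApp of surface * surface
      | SInt of int

    type target =                         (* the elaborated core *)
      | TVar of string
      | TLam of string * target
      | TApp of target * target
      | TInt of int

    exception Type_error of string

    (* One pass computes the type *and* the elaborated term. *)
    let rec infer env = function
      | SVar x -> (List.assoc x env, TVar x)   (* Not_found if unbound *)
      | SInt n -> (Int, TInt n)
      | SLam (x, t, body) ->
          let tb, body' = infer ((x, t) :: env) body in
          (Arrow (t, tb), TLam (x, body'))
      | SApp (f, a) ->
          (match infer env f with
           | Arrow (t1, t2), f' ->
               let ta, a' = infer env a in
               if ta <> t1 then raise (Type_error "argument mismatch");
               (t2, TApp (f', a'))
           | _ -> raise (Type_error "applying a non-function"))

For instance, infer [] (SApp (SLam ("x", Int, SVar "x"), SInt 1)) yields (Int, TApp (TLam ("x", TVar "x"), TInt 1)).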

Taken to its extreme: why not make an assembler with a powerful macro system of this sort the host language, and build every high-level language on top of it? I'm not sure this approach extends that far, but it's an interesting idea. You'd need a cpp-like, highly portable macro tool, and porting to a new platform would consist of writing architecture-specific macros for some core language, like System F.

This work may also conceptually dovetail with another thread discussing fexprs and compilation.
