Functional

Extensible Effects -- An Alternative to Monad Transformers

Extensible Effects -- An Alternative to Monad Transformers, by Oleg Kiselyov, Amr Sabry and Cameron Swords:

We design and implement a library that solves the long-standing problem of combining effects without imposing restrictions on their interactions (such as static ordering). Effects arise from interactions between a client and an effect handler (interpreter); interactions may vary throughout the program and dynamically adapt to execution conditions. Existing code that relies on monad transformers may be used with our library with minor changes, gaining efficiency over long monad stacks. In addition, our library has greater expressiveness, allowing for practical idioms that are inefficient, cumbersome, or outright impossible with monad transformers.

Our alternative to a monad transformer stack is a single monad, for the coroutine-like communication of a client with its handler. Its type reflects possible requests, i.e., possible effects of a computation. To support arbitrary effects and their combinations, requests are values of an extensible union type, which allows adding and, notably, subtracting summands. Extending and, upon handling, shrinking of the union of possible requests is reflected in its type, yielding a type-and-effect system for Haskell. The library is lightweight, generalizing the extensible exception handling to other effects and accurately tracking them in types.

A follow-up to Oleg's delimited-continuation adaptation of Cartwright and Felleisen's work on Extensible Denotational Language Specifications, a promising alternative to standard monad transformers as a means of composing effects.

This work embeds a user-extensible effect EDSL in Haskell by encoding all effects into a single effect monad using a novel open union type and the continuation monad. The encoding is very similar to recent work on Algebraic Effects and Handlers, and closely resembles a typed client-server interaction à la coroutines. This seems like a nice convergence of the topics covered in the algebraic effects thread and other recent work on effects, and it's more efficient than monad transformers to boot.
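To make the client/handler picture concrete, here is a minimal, self-contained Haskell sketch of the coroutine-like interaction (the names and the single hard-wired Get request are mine for brevity; the paper generalizes the request to an open union of effect types, which handlers shrink as they discharge effects):

    -- A computation either finishes, or suspends with a request and a
    -- continuation awaiting the handler's reply.
    data Comp e a = Done a | Get (e -> Comp e a)

    instance Functor (Comp e) where
      fmap f (Done a) = Done (f a)
      fmap f (Get k)  = Get (fmap f . k)

    instance Applicative (Comp e) where
      pure = Done
      Done f <*> c = fmap f c
      Get k  <*> c = Get (\e -> k e <*> c)

    instance Monad (Comp e) where
      Done a >>= f = f a
      Get k  >>= f = Get (\e -> k e >>= f)

    -- The client issues a request and waits.
    ask :: Comp e e
    ask = Get Done

    -- The handler plays the server, answering every Get with the
    -- environment; its type shows the effect has been fully discharged.
    runReader :: e -> Comp e a -> a
    runReader _ (Done a) = a
    runReader e (Get k)  = runReader e (k e)

    example :: Int
    example = runReader 10 ((+) <$> ask <*> ask)  -- evaluates to 20

In the paper the single Get constructor becomes an injection into an extensible union, so many effects coexist in one monad and each handler peels its own requests off the type.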

How OCaml type checker works -- or what polymorphism and garbage collection have in common

How OCaml type checker works -- or what polymorphism and garbage collection have in common

There is more to Hindley-Milner type inference than Algorithm W. In 1988, Didier Rémy was looking to speed up type inference in Caml and discovered an elegant method of type generalization. Not only is it fast, avoiding any scan of the type environment; it also smoothly extends to catching locally declared types about to escape, to type-checking universals and existentials, and to implementing MLF.

Alas, both the algorithm and its implementation in the OCaml type checker are little known and little documented. This page sets out to explain and popularize Rémy's algorithm, to decipher a part of the OCaml type checker, and to preserve the history of Rémy's algorithm.

The attraction of the algorithm is its insight into type generalization as dependency tracking -- the same sort of tracking used in automated memory management such as regions and generational garbage collection. Generalization can be viewed as finding dominators in the type-annotated abstract syntax tree with edges for shared types. Fluet and Morrisett's type system for regions and MetaOCaml environment classifiers use the generalization of a type variable as a criterion of region containment. Uncannily, Rémy's algorithm views region containment as a test of whether a type variable is generalizable.

As usual with Oleg, there's a lot going on here. Personally, I see parallels with "lambda with letrec" and "call-by-push-value," although making the connection with the latter takes some squinting through some of Levy's work other than his CBPV thesis. Study this to understand OCaml type inference and/or MLF, or for insights into region typing, or, as the title suggests, for suggestive analogies between polymorphism and garbage collection.
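For a taste of the mechanism, here is a toy Haskell rendering of the levels idea (the representation and names are mine, not the OCaml type checker's): each type variable records the let-nesting depth at which it was created, unification propagates the minimum, and generalization becomes a constant-time level comparison.

    import Data.IORef

    -- Type variables are mutable cells, as in the OCaml type checker.
    data Ty = TVar (IORef TV) | TArrow Ty Ty | TConst String
    data TV = Unbound Int Int   -- variable id and the let-level of its birth
            | Link Ty           -- instantiated by unification

    -- Unifying two unbound variables keeps the shallower (outermore)
    -- level, propagating the dependency much as a generational GC
    -- propagates reachability from old objects to young ones.
    unifyVars :: IORef TV -> IORef TV -> IO ()
    unifyVars r1 r2 = do
      Unbound i1 l1 <- readIORef r1
      Unbound _  l2 <- readIORef r2
      writeIORef r1 (Unbound i1 (min l1 l2))
      writeIORef r2 (Link (TVar r1))

    -- At a let, a variable may be generalized iff it was born deeper
    -- than the binding being generalized: a constant-time test, with
    -- no scan of the type environment.
    canGeneralize :: Int -> TV -> Bool
    canGeneralize letLevel (Unbound _ l) = l > letLevel
    canGeneralize _        (Link _)      = False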

Visi.io

Visi.io comes from David Pollak and aims to revolutionize the building of tablet apps, but the main attraction now seems to be its exploration of how data flow and cloud computing can be integrated. The screencast is somewhat underwhelming but at least convinces me that there is a working prototype (I haven't looked further than the website as yet). The vision document has some nice ideas. Visi.io came up recently in the discussion of the future of spreadsheets.

Milner Symposium 2012

The Milner Symposium 2012 was held in Edinburgh this April in memory of the late Robin Milner.

The Milner Symposium is a celebration of the life and work of one of the world's greatest computer scientists, Robin Milner. The symposium will feature leading researchers whose work is inspired by Robin Milner.

The programme consisted of academic talks by colleagues and past students. The talks and slides are available online.

I particularly liked the interleaving of the personal and human narrative underlying the scientific journey; a good example is Joachim Parrow's talk on the origins of the pi calculus. Of special interest to LtU members is the panel on the future of functional programming languages, consisting of Phil Wadler, Xavier Leroy, David MacQueen, Martin Odersky, Simon Peyton Jones, and Don Syme.

CUFP 2012 Tutorials

If you are in Denmark in September, you should attend. If not, raise a glass to salute what is now abundantly clear: FPLs are conquering the ("real") world.

Koka, a function-oriented language with effect inference

Koka is a function-oriented programming language that separates pure values from side-effecting computations, where the effect of every function is automatically inferred. Koka has many features that help programmers to easily change their data types and code organization correctly, while having a small language core with a familiar JavaScript-like syntax.

Koka extends the idea of using row polymorphism to encode an effect system and the relations between effects. Daan Leijen is the primary researcher behind it, and his research has been featured previously on LtU, mainly on row polymorphism in the Morrow Language.
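As a rough illustration of effects-as-rows, here is a crude Haskell approximation (the types are hypothetical and mine; Koka's actual rows are genuinely polymorphic, with first-class row variables, which this type-level-list encoding does not capture):

    {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}

    -- An effect is a type-level label; a computation carries a phantom
    -- row listing the effects it may perform.
    data Effect = Exn | Div | Console

    newtype Eff (r :: [Effect]) a = Eff a

    -- Sequencing combines the rows of the two steps (by concatenation
    -- here; real rows are treated up to reordering).
    type family (xs :: [Effect]) ++ (ys :: [Effect]) :: [Effect] where
      '[]       ++ ys = ys
      (x ': xs) ++ ys = x ': (xs ++ ys)

    andThen :: Eff r a -> (a -> Eff s b) -> Eff (r ++ s) b
    andThen (Eff a) f = case f a of Eff b -> Eff b

    -- The effect is visible in the type, much as Koka's inference
    -- would report it.
    safeDiv :: Int -> Int -> Eff '[Exn] Int
    safeDiv _ 0 = Eff 0            -- placeholder; a real version would raise
    safeDiv x y = Eff (x `div` y)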

So far there's no paper available on the language design, just the slides from a Lang.Next talk (which doesn't seem to have video available at Channel 9), but it's in the program for HOPE 2012.

Mechanized λJS

Mechanized λJS
The Brown PLT Blog, 2012-06-04

In an earlier post, we introduced λJS, our operational semantics for JavaScript. Unlike many other operational semantics, λJS is no toy, but strives to correctly model JavaScript's messy details. To validate these claims, we test λJS with randomly generated tests and with portions of the Mozilla JavaScript test suite.

Testing is not enough. Despite our work, other researchers found a missing case in λJS. Today, we're introducing Mechanized λJS, which comes with a machine-checked proof of correctness, using the Coq proof assistant.

More work on mechanizing the actual, implemented semantics of a real language, rather than a toy.

How to Make Ad Hoc Proof Automation Less Ad Hoc

How to Make Ad Hoc Proof Automation Less Ad Hoc
Georges Gonthier, Beta Ziliani, Aleksandar Nanevski, and Derek Dreyer, to appear in ICFP 2011

Most interactive theorem provers provide support for some form of user-customizable proof automation. In a number of popular systems, such as Coq and Isabelle, this automation is achieved primarily through tactics, which are programmed in a separate language from that of the prover’s base logic. While tactics are clearly useful in practice, they can be difficult to maintain and compose because, unlike lemmas, their behavior cannot be specified within the expressive type system of the prover itself.

We propose a novel approach to proof automation in Coq that allows the user to specify the behavior of custom automated routines in terms of Coq’s own type system. Our approach involves a sophisticated application of Coq’s canonical structures, which generalize Haskell type classes and facilitate a flexible style of dependently-typed logic programming. Specifically, just as Haskell type classes are used to infer the canonical implementation of an overloaded term at a given type, canonical structures can be used to infer the canonical proof of an overloaded lemma for a given instantiation of its parameters. We present a series of design patterns for canonical structure programming that enable one to carefully and predictably coax Coq’s type inference engine into triggering the execution of user-supplied algorithms during unification, and we illustrate these patterns through several realistic examples drawn from Hoare Type Theory. We assume no prior knowledge of Coq and describe the relevant aspects of Coq type inference from first principles.

If you've ever toyed with Coq but run into the difficulties many encounter in trying to construct robust, comprehensible proof scripts using tactics (which manipulate the proof state, can leave you with the "ground" of the proof rather than the "figure," if you will, and are fragile in the face of change), you may wish to give this a read. It frankly never would have occurred to me to try to turn Ltac scripts into lemmas at all. This is much more appealing than most other approaches to the subject I've seen.
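The type-class half of the paper's analogy is easy to see in plain Haskell (this example is mine, not the paper's Coq development): instance resolution performs a small logic-programming search that assembles a canonical witness for a compound type, just as canonical structures let Coq infer the canonical proof of an overloaded lemma.

    -- A class of types with a canonical equality test.
    class Eq' a where
      eq :: a -> a -> Bool

    instance Eq' Int where
      eq = (==)

    instance Eq' a => Eq' [a] where
      eq xs ys = length xs == length ys && and (zipWith eq xs ys)

    instance (Eq' a, Eq' b) => Eq' (a, b) where
      eq (a1, b1) (a2, b2) = eq a1 a2 && eq b1 b2

    -- Asking for eq at type ([Int], Int) triggers a search over the
    -- instances above, assembling the witness with no hand-written glue.
    test :: Bool
    test = eq ([1, 2 :: Int], 3 :: Int) ([1, 2], 3)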

Validating LR(1) parsers

Validating LR(1) parsers

An LR(1) parser is a finite-state automaton, equipped with a stack, which uses a combination of its current state and one lookahead symbol in order to determine which action to perform next. We present a validator which, when applied to a context-free grammar G and an automaton A, checks that A and G agree. Validating the parser provides the correctness guarantees required by verified compilers and other high-assurance software that involves parsing. The validation process is independent of which technique was used to construct A. The validator is implemented and proved correct using the Coq proof assistant. As an application, we build a formally-verified parser for the C99 language.

I've always been somewhat frustrated, while studying verified compiler technology, that the scope of the effort has generally been limited to ensuring that the AST and the generated code mean the same thing, as important as that obviously is; not enough attention has been paid, IMHO, to other compiler phases. Parsing: The Solved Problem That Isn't does a good job illuminating some of the conceptual issues that arise in taking parsers seriously as functions we would like to compose while maintaining properties that hold of the individual parsers. Perhaps this work can shed some light on possible solutions to some of those issues, in addition to being worthwhile in its own right. Note the pleasing presence of an actual implementation that has been used on the parser of a real-world language, C99.
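Schematically, this is the familiar translation-validation pattern. In a hypothetical Haskell rendering (the real checker and its correctness proof live in Coq; the types below are empty placeholders):

    -- Placeholders standing in for real grammar and automaton structures.
    data Grammar   = Grammar
    data Automaton = Automaton

    -- The generator stays untrusted: any off-the-shelf LR(1) construction.
    buildParser :: Grammar -> Automaton
    buildParser _ = Automaton

    -- The checker is what gets verified, once and for all; a stub here.
    validate :: Grammar -> Automaton -> Bool
    validate _ _ = True

    -- Correctness of any accepted automaton follows from the checker's
    -- proof, independently of how buildParser was implemented.
    trustedParser :: Grammar -> Maybe Automaton
    trustedParser g =
      let a = buildParser g
      in if validate g a then Just a else Nothing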

Programming with Algebraic Effects and Handlers

Programming with Algebraic Effects and Handlers. Andrej Bauer and Matija Pretnar, arXiv preprint.

Eff is a programming language based on the algebraic approach to computational effects, in which effects are viewed as algebraic operations and effect handlers as homomorphisms from free algebras. Eff supports first-class effects and handlers through which we may easily define new computational effects, seamlessly combine existing ones, and handle them in novel ways. We give a denotational semantics of eff and discuss a prototype implementation based on it. Through examples we demonstrate how the standard effects are treated in eff, and how eff supports programming techniques that use various forms of delimited continuations, such as backtracking, breadth-first search, selection functionals, cooperative multi-threading, and others.

Eff has been discussed here before, and it's nice to see some more progress and a much more complete introduction. The paper is intended for a general audience (well, a general audience of PL enthusiasts). It's quite clear and contains a wealth of examples.
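The free-algebra view transcribes naturally into Haskell. The toy rendering below (an analogy of mine, not eff's implementation; eff is a standalone language) shows an effect signature as a functor, programs as the free monad over it, and a handler as a fold that interprets binary choice by backtracking.

    {-# LANGUAGE DeriveFunctor #-}

    -- Programs over an effect signature f, as a free monad.
    data Free f a = Pure a | Op (f (Free f a))
      deriving Functor

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g <*> m = fmap g m
      Op f   <*> m = Op (fmap (<*> m) f)

    instance Functor f => Monad (Free f) where
      Pure a >>= k = k a
      Op f   >>= k = Op (fmap (>>= k) f)

    -- One algebraic operation: a binary choice between continuations.
    data Choice k = Choose k k deriving Functor

    choose :: Free Choice Bool
    choose = Op (Choose (Pure True) (Pure False))

    -- A handler is a homomorphism from the free algebra: this one
    -- interprets Choose by exploring both branches, i.e. backtracking.
    allResults :: Free Choice a -> [a]
    allResults (Pure a)          = [a]
    allResults (Op (Choose l r)) = allResults l ++ allResults r

    -- allResults (do b <- choose; c <- choose; pure (b, c))
    --   == [(True,True),(True,False),(False,True),(False,False)]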
