Extensible Effects -- An Alternative to Monad Transformers

Extensible Effects -- An Alternative to Monad Transformers, by Oleg Kiselyov, Amr Sabry and Cameron Swords:

We design and implement a library that solves the long-standing problem of combining effects without imposing restrictions on their interactions (such as static ordering). Effects arise from interactions between a client and an effect handler (interpreter); interactions may vary throughout the program and dynamically adapt to execution conditions. Existing code that relies on monad transformers may be used with our library with minor changes, gaining efficiency over long monad stacks. In addition, our library has greater expressiveness, allowing for practical idioms that are inefficient, cumbersome, or outright impossible with monad transformers.

Our alternative to a monad transformer stack is a single monad, for the coroutine-like communication of a client with its handler. Its type reflects possible requests, i.e., possible effects of a computation. To support arbitrary effects and their combinations, requests are values of an extensible union type, which allows adding and, notably, subtracting summands. Extending and, upon handling, shrinking of the union of possible requests is reflected in its type, yielding a type-and-effect system for Haskell. The library is lightweight, generalizing the extensible exception handling to other effects and accurately tracking them in types.

A follow-up to Oleg's delimited continuation adaptation of Cartwright and Felleisen's work on Extensible Denotational Language Specifications, which offers a promising alternative to standard monad transformers for composing effects.

This work embeds a user-extensible effect EDSL in Haskell by encoding all effects into a single effect monad using a novel open union type and the continuation monad. The encoding is very similar to recent work on Algebraic Effects and Handlers, and closely resembles a typed client-server interaction à la coroutines. This seems like a nice convergence of the topics covered in the algebraic effects thread and other recent work on effects, and it's more efficient than monad transformers to boot.
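
To give a feel for the client/handler style, here is a minimal Haskell sketch with a single Reader-like effect and no open union. The names (Comp, Ask, runAsk) are illustrative only, not the paper's library API: a computation either returns a value or suspends with a request that carries its continuation, and a handler answers requests until the computation is done.

-- Illustrative sketch only, not the paper's API: a computation either
-- finishes, or suspends with a request carrying the continuation to
-- resume once the handler answers it.
data Comp req a
  = Done a
  | Request (req (Comp req a))

instance Functor req => Functor (Comp req) where
  fmap f (Done a)    = Done (f a)
  fmap f (Request r) = Request (fmap (fmap f) r)

instance Functor req => Applicative (Comp req) where
  pure = Done
  Done f    <*> c = fmap f c
  Request r <*> c = Request (fmap (<*> c) r)

instance Functor req => Monad (Comp req) where
  Done a    >>= k = k a
  Request r >>= k = Request (fmap (>>= k) r)

-- One request type: ask the handler for an environment of type e.
newtype Ask e k = Ask (e -> k)

instance Functor (Ask e) where
  fmap f (Ask g) = Ask (f . g)

ask :: Comp (Ask e) e
ask = Request (Ask Done)

-- The handler plays the server side of the coroutine: it answers
-- every Ask with the environment until the client is Done.
runAsk :: e -> Comp (Ask e) a -> a
runAsk _ (Done a)          = a
runAsk e (Request (Ask k)) = runAsk e (k e)

example :: Int
example = runAsk 10 computation
  where
    computation = do
      x <- ask
      y <- ask
      return (x + y)   -- evaluates to 20

In the paper, the request type is an open union rather than a fixed Ask, so a handler interprets its own requests and relays the rest, and each handled effect is removed from the computation's type.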

Mixed-Site Variance

Ross Tate is calling for "Industry Endorsement" for his paper Mixed-Site Variance.

...this is an attempt to make industry experience admissible as evidence in academic settings, just like they do in industry settings.

Abstract:

Java introduced wildcards years ago. Wildcards were very expressive, and they were integral to updating the existing libraries to make use of generics. Unfortunately, wildcards were also complex and verbose, making them hard and inconvenient for programmers to adopt. Overall, while an impressive feature, wildcards are generally considered to be a failure. As such, many languages adopted a more restricted feature for generics, namely declaration-site variance, because designers believed its simplicity would make it easier for programmers to adopt. Indeed, declaration-site variance has been quite successful. However, it is also completely unhelpful for many designs, including many of those in the Java SDK. So, we have designed mixed-site variance, a careful combination of definition-site and use-site variance that avoids the failings of wildcards. We have been working with JetBrains to put this into practice by incorporating it into the design of their upcoming language, Kotlin. Here we exposit our design, our rationale, and our experiences.

It is also mentioned on JetBrains' Kotlin blog.

Heap space analysis for garbage collected languages

Heap space analysis for garbage collected languages, by Elvira Albert, Samir Genaim, Miguel Gómez-Zamalloa:

Accurately predicting the dynamic memory consumption (or heap space) of programs can be critical during software development. It is well-known that garbage collection (GC) complicates this problem. The peak heap consumption of a program is the maximum size of the data on the heap during its execution, i.e., the minimum amount of heap space needed to safely run the program. Existing heap space analyses either do not take deallocation into account or adopt specific models of garbage collectors which do not necessarily correspond to the actual memory usage. This paper presents a novel static analysis for garbage collected imperative languages that infers accurate upper bounds on the peak heap usage, including exponential, logarithmic and polynomial bounds. A unique characteristic of the analysis is that it is parametric on the notion of object lifetime, i.e., on when objects become collectible.

Similar work has been covered here in the past.

The Three Laws of Programming Language Design

Joe Armstrong (of Erlang), while reviewing Elixir (a Ruby-like language that compiles to the Erlang virtual machine), states his Three Laws of Programming Language Design.

  • What you get right nobody mentions.
  • What you get wrong, people bitch about.
  • What is difficult to understand you have to explain to people over and over again.

Some languages get some things so right that nobody ever bothers to mention them; they are right, they are beautiful, they are easy to understand.

The wrong stuff is a bitch. You boobed, but you are forgiven if the good stuff outweighs the bad. This is the stuff you want to remove later, but you can't, because of backwards compatibility and because some nitwit has written a zillion lines of code using all the bad stuff.

The difficult to understand stuff is a real bummer. You have to explain it over and over again until you're sick, and some people never get it. You have to write hundreds of mails and thousands of words explaining over and over again what this stuff means and why it is so. For a language designer, or author, this is a pain in the bottom.

On the history of the question of whether natural language is “illogical”

A nice essay from Barbara Partee on the origins of formal semantics of natural languages and Montague Grammar.

While not directly programming language material, the topic is likely to interest many here. I think several interesting previous discussions related to Montague can be found by searching the archives.

Terra: A low-level counterpart to Lua

A very interesting project developed by Zachary DeVito et al. at Stanford University:

Terra is a new low-level system programming language that is designed to interoperate seamlessly with the Lua programming language:

-- This top-level code is plain Lua code.
print("Hello, Lua!")

-- Terra is backwards compatible with C
-- we'll use C's io library in our example.
C = terralib.includec("stdio.h")

-- The keyword 'terra' introduces
-- a new Terra function.
terra hello(argc : int, argv : &rawstring)
    -- Here we call a C function from Terra
    C.printf("Hello, Terra!\n")
    return 0
end

-- You can call Terra functions directly from Lua
hello(0,nil)

-- Or, you can save them to disk as executables or .o
-- files and link them into existing programs
terralib.saveobj("helloterra",{ main = hello })

Like C, Terra is a simple, statically-typed, compiled language with manual memory management. But unlike C, it is designed from the beginning to interoperate with Lua. Terra functions are first-class Lua values created using the terra keyword. When needed they are JIT-compiled to machine code.

Seems as if the target use case is high-performance computing. The team has also released a related paper, titled Terra: A Multi-Stage Language for High-Performance Computing:

High-performance computing applications, such as auto-tuners and domain-specific languages, rely on generative programming techniques to achieve high performance and portability. However, these systems are often implemented in multiple disparate languages and perform code generation in a separate process from program execution, making certain optimizations difficult to engineer. We leverage a popular scripting language, Lua, to stage the execution of a novel low-level language, Terra. Users can implement optimizations in the high-level language, and use built-in constructs to generate and execute high-performance Terra code. To simplify meta-programming, Lua and Terra share the same lexical environment, but, to ensure performance, Terra code can execute independently of Lua’s runtime. We evaluate our design by reimplementing existing multi-language systems entirely in Terra. Our Terra-based auto-tuner for BLAS routines performs within 20% of ATLAS, and our DSL for stencil computations runs 2.3x faster than hand-written C.

Lisp in Summer Projects

This summer, spend some quality time with your favorite technology in our 2013 summer programming contest!

The Lisp community is awarding prizes for demonstrating interesting and useful programs, technologies and art using any LISP-based technology.

Lisp, prizes, what's not to like?

Typesafe Activator

A new addition to the Typesafe Platform is Activator, a unique, browser-based tool that helps developers get started with Typesafe technologies quickly and easily. Getting started is a snap; just download, extract and run the executable to start building applications immediately via the easy-to-use, wizard-based interface. Common development patterns are presented through reusable templates that are linked to in-context tutorials which explain step-by-step exactly how things work. The Activator environment supports each stage of the application development lifecycle: Code, Compile, Run, and Test. At the appropriate time, Activator can generate fully-fledged projects for the leading IDEs so that application development can continue in these environments.

You can download Activator here.

Truth be told, the web site has too much hype and not enough details for my tastes. Had I not known about some of the technologies behind the Typesafe Platform I wouldn't go past the first page. Hopefully this side of things will be improved. People developing in Scala might want to share their experiences in the comments.

John C. Reynolds, 1935-2013

Randy Bryant, dean of the school of computer science at CMU, sent out an email saying that John C. Reynolds passed away yesterday.

Subject: In Memoriam. John Reynolds, June 1, 1935 - April 28, 2013
Date: Sun, 28 Apr 2013 21:45:12 -0400
From: Randy Bryant
To: scs-all@cs.cmu.edu

I'm sorry to announce that John Reynolds, a long-time member of our computer science faculty, passed away early this morning. Many of you know that John had been in declining health recently. We were able to celebrate his retirement last summer. He had a heart attack last week and went downhill over a period of several days.

John got his PhD in 1961 in theoretical physics, but while working at Argonne National Laboratory came to realize that his passion was for computation. He became a very successful computer scientist, focusing on the logical foundations of programs and programming languages. He was at Syracuse University from 1970 to 1986 and then joined the CSD faculty.

John has made many important contributions over his career. Interestingly, his 2002 work on separation logic, done jointly with Peter O'Hearn and others, has been especially prominent. Separation logic provides a formal way to reason about what we might think of as "normal programs," i.e., ones that operate by changing the values stored in memory, but where memory is partitioned into independent blocks, and so we can reason about different program components independently. I can only hope that the work I do at age 67 would be counted among my best!

We will also remember John for his cheerful spirit, his high ethical standards, and his deep intellect. He will very much be missed.

Randy Bryant

It's probably impossible to overstate the impact that John had on the field of programming languages. But beyond being a great scholar, he was also a generous mentor and a fundamentally decent and kind human being. He will indeed very much be missed.

Teaching Garbage Collection

Teaching garbage collection by implementing GCs can imply heavy curricular dependencies. We've worked at shrinking them so the material can be used in any number of contexts, and this material is being used by several universities that use PLAI. We have a pedagogic paper about our approach, which we've summarized in a blog post (with a link to the full paper).