LtU Forum

Let's kick continuations around for a while...

Recently Rys McCusker and I started talking about continuations, in another node where that was probably off topic. The link if you want to read it is http://lambda-the-ultimate.org/node/5082.

The ability to refer directly to continuations is supposed to give something like an "ultimate" control structure, a "goto with its context intact," with which all other control structures can be trivially implemented.

But because it does a lot more than "goto" in order to preserve that context, there can be serious problems when that extra work is irrelevant, wasteful, or harmful. And in my experience, some kinds of continuations are just plain worse than others. But let me explain what I mean by kinds of continuations. Most languages that have them at all have only one kind, so people who program in only one language tend never to notice that there are different kinds.

There are a bunch of different things that can be called continuations. I think of them as divided along four axes. First, there is delimited versus reified. Second, there is lexical capture versus dynamic capture (or both, or neither). Third, there is winding/unwinding versus raw. And fourth, there is mutable versus immutable capture.

Delimited or "one-shot" continuations are delimited in dynamic extent: you can store them in values or pass them to subroutines or whatever, but having called a function, you can use its continuation to return from that function exactly once, and only if you haven't already returned from the function some other way first. With delimited continuations you can reorder the sequence in which functions return, making it different from the usual standard of being the reverse of the sequence in which they were called. You can even "delay" some of the function returns until after the program halts (i.e., never) by just never using them, or have your function skip the returns of its callees by telling them to call its own continuation directly. These are mostly no problem in my experience.
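Here is a minimal sketch of the one-shot escape pattern, using Haskell's Cont monad as a stand-in for a language with native continuations (names like findFirst are mine, just for illustration):

```haskell
import Control.Monad.Cont

-- search a list, escaping with the answer the moment we find it;
-- the escape continuation is used at most once, within findFirst's extent
findFirst :: (a -> Bool) -> [a] -> Maybe a
findFirst p xs = flip runCont id $ callCC $ \escape -> do
    mapM_ (\x -> if p x then escape (Just x) else return ()) xs
    return Nothing                    -- fell off the end: normal return

main :: IO ()
main = print (findFirst even [1, 3, 4, 5 :: Int])   -- Just 4
```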

Reified continuations can do the same kinds of things delimited continuations can do in terms of returning from a function, passing a continuation to a called function, or reordering returns, but they have an additional property. Having captured a reified continuation and stored it as a value in some variable that persists beyond the dynamic extent of the function (i.e., after the function has already returned), you can call it and simulate that function returning, with the same environment (which may include the environments of other functions that have already returned), AGAIN, even though the function whose return you are simulating has not been called a second time. This sounds like a good idea, but experience has taught me that it is deeply problematic.
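In Haskell's ContT the captured continuation is multi-shot, so this re-entry behavior can be sketched with the well-known getCC idiom: a continuation stored in a variable and re-invoked after control has already moved past its capture point.

```haskell
import Control.Monad.Cont
import Control.Monad.IO.Class (liftIO)
import Data.IORef

-- capture a continuation that can be re-entered, goto-style
getCC :: MonadCont m => m (m a)
getCC = callCC (\k -> let x = k x in return x)

main :: IO ()
main = flip runContT return $ do
    counter <- liftIO (newIORef (0 :: Int))
    again   <- getCC                    -- control re-enters HERE each time
    n       <- liftIO (readIORef counter)
    liftIO (writeIORef counter (n + 1))
    if n < 3
      then liftIO (print n) >> again    -- replay the "return" of getCC
      else return ()                    -- prints 0, 1, 2
```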

Lexical capture vs. dynamic capture is the question of whether your continuations capture the lexical environment (i.e., the bindings within the lexical scope of the function whose continuation is taken) or the dynamic environment (i.e., the bindings within the dynamic extent of the function whose continuation is taken). Most programming languages use either scope (lexical scope) or extent (dynamic scope), but rarely both. In those two cases the choice is trivial, because your continuations will capture the same environment whose bindings are extended by your function calls (it could be done the other way, but that would be insane). However, if your language has both lexically scoped and dynamically scoped variables, this distinction becomes crucially important. Usually continuations that capture the lexical environment will let the dynamic environment be inherited from wherever the continuation is called, rather than from where the continuation is taken. You can make powerful arguments on both sides of the question of whether continuations that capture the lexical environment should also capture the dynamic environment. Semantically it's nicer if they do, but in terms of implementation it doubles the branchiness of your environment tree and can become very expensive very fast if people use it a lot. In early dynamically scoped Lisps, before lexical scope became the default, there were (delimited) continuations that captured the dynamic environment but not the lexical environment. I guess there is actually a fourth subcategory here: "goto" is a sort of continuation that captures neither the lexical nor the dynamic environment.
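A tiny sketch of the call-site-inherits-the-dynamic-environment behavior, with a Haskell IORef plus save/restore standing in for one dynamic variable and a closure standing in for the continuation (all names here are mine):

```haskell
import Data.IORef

-- emulate a dynamic binding: set, run, restore on normal exit
withDyn :: IORef a -> a -> IO b -> IO b
withDyn ref v act = do
    old <- readIORef ref
    writeIORef ref v
    r <- act
    writeIORef ref old
    return r

main :: IO ()
main = do
    dyn <- newIORef "top-level"
    -- k lexically captures the ref, but reads the value at call time
    k <- withDyn dyn "inner" $
           return (readIORef dyn >>= putStrLn)
    k   -- prints "top-level": the dynamic binding comes from the call
        -- site, not from where k was created
```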

Winding and unwinding is another important distinction, and another controversy. If your continuations wind and unwind, you can arrange for code to be called whenever a given scope (or extent) is entered or left, including entering or leaving it by means of a continuation. A classic example is releasing a resource when you escape from a function that needs that resource, and locking the resource again when re-entering that function. With raw continuations, a jump is just a jump; you don't get to run set-up and shut-down code. In some ways winding and unwinding continuations are more orderly, but once again the devil is in the implementation: the cost of winding renders continuations nearly useless for mechanisms like lightweight context switches between green threads.
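Haskell's bracket shows the unwind half of this in miniature; the wind half, re-running set-up code when a continuation re-enters the extent, has no analogue here, which is exactly what Scheme's dynamic-wind adds.

```haskell
import Control.Exception (bracket)

-- release the resource on ANY exit from the extent, even an exception
main :: IO ()
main = bracket
    (putStrLn "acquire lock" >> return "the-lock")  -- on entry
    (\_ -> putStrLn "release lock")                 -- on every exit
    (\_ -> putStrLn "critical section")
```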

And finally we get to the last axis: is the captured environment mutable or immutable after capture? An implementation of continuations could copy the stack and then treat the copied value as immutable. If the bindings visible on the stack are mutated before the continuation is called, invoking the continuation copies the saved stack (including the former values) back into the working stack, undoing the mutations. This type of continuation is called a "restart" and is very rare, because it is highly expensive in memory. In the usual case, the captured environment is mutable: the implementation records only pointers to the stack frames where the variables are found, not the variables themselves. So if code run after the continuation is captured mutates those variables, the continuation, when invoked, will not overwrite the mutated values.
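The difference is just copy-at-capture versus read-through-a-pointer-at-invoke. In miniature, with a Haskell IORef standing in for a stack slot:

```haskell
import Data.IORef

main :: IO ()
main = do
    cell <- newIORef (1 :: Int)
    snapshot <- readIORef cell     -- copy the value at capture time
    let shared = readIORef cell    -- keep the pointer; read at invoke time
    writeIORef cell 99
    print snapshot                 -- 1:  restart-style, mutation invisible
    shared >>= print               -- 99: the usual case, mutation visible
```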

And I think that is an exhaustive taxonomy of all the things that have been called "continuations." Doubtless I've missed something.

Continuations are used to implement:
control structures - for example if you want to roll your own do-while loops, no environment capture is needed.
nonlocal flow of control - for example if you want to roll your own try/catch and exception handlers (a minimal sketch follows this list).
variable scoping - using the winding/unwinding variety, you can simulate dynamic variables in lexically scoped languages or vice versa.
green threads - if you want to roll your own cooperative multitasking.
session management - you can represent the session of visitor #1392 to your website (or whatever) as a continuation.
type system extensions - winding/unwinding continuations let you run typechecking code on entry to and exit from functions, even for types the native language could not define. The drawback is that these are runtime rather than compile-time typechecks.
and many other things.
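For instance, the roll-your-own try/catch mentioned above, sketched with Haskell's Cont monad (divC and the message strings are made up for the example): the handler is just an escape continuation passed down to callees.

```haskell
import Control.Monad.Cont

-- a callee signals failure by invoking the throw continuation it was given
divC :: Int -> Int -> (String -> Cont r Int) -> Cont r Int
divC _ 0 throw = throw "divide by zero"
divC a b _     = return (a `div` b)

-- "try": the handler wraps the escape continuation of this block
example :: Cont r String
example = callCC $ \escape -> do
    let throw msg = escape ("caught: " ++ msg)
    q <- divC 10 0 throw
    return ("result: " ++ show q)

main :: IO ()
main = putStrLn (runCont example id)   -- caught: divide by zero
```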

Many of these things can't be done with the other tools a well-defined language gives you to work with. So, yay.

But many of these are BAD uses for continuations, in that continuations are a BAD way to implement these things. So, boo.

Many of these things cannot coexist in the same program because different uses of continuations would force stack captures that can never be recovered by garbage collection, or otherwise interfere with each other. This is especially true of reified continuations. So, boo some more.

Many of these different "continuations" have fundamental semantic differences, in terms of how well they compose and how tractable their combinations are.

And now that I've written half a book here, you guys can tell me what I missed or what it means. ;-)

Wrangling C via Static Checks

If you can't beat them, join them? Static Program Analyzer for Safety-Critical Real-Time Embedded Software is the kind of thing I search around for; my 'real time' app is video games. :-)

We report on a successful preliminary experience in the design and implementation of a special-purpose Abstract Interpretation based static program analyzer for the verification of safety critical embedded real-time software. The analyzer is both precise (zero false alarm in the considered experiment) and efficient (less than one minute of analysis for 10,000 lines of code). Even if it is based on a simple interval analysis, many features have been added to obtain the desired precision: expansion of small arrays, widening with several thresholds, loop unrolling, trace partitioning, relations between loop counters and other variables. The efficiency of the tool mainly comes from a clever representation of abstract environments based on balanced binary search trees.

In this paper, we report on a first experience on the design of a special-purpose static program analyzer. The considered class of software is critical real-time synchronous embedded software written in a subset of C. The considered class of specifications is that of absence of run-time error. This experience report explains the crucial design decisions and dead-ends that lead from a too imprecise and too slow initial implementation of a general-purpose static program analyzer to a completely rewritten, very fast, and extremely precise special-purpose static program analyzer. By providing details on the design and implementation of this static analyzer as well as on its precision (absence of false alarm), execution time, and memory usage, we prove that the approach is technically and economically viable.
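The "widening with several thresholds" trick is easy to sketch: instead of jumping a growing bound straight to infinity, the analysis climbs a ladder of plausible constants, so loop invariants like 0 <= i <= 255 stay discoverable. A toy version (my sketch, far simpler than the analyzer described):

```haskell
-- a toy interval domain; min/maxBound stand in for -/+ infinity
data Itv = Itv Int Int deriving Show   -- [lo, hi]

joinI :: Itv -> Itv -> Itv
joinI (Itv a b) (Itv c d) = Itv (min a c) (max b d)

thresholds :: [Int]                    -- "several thresholds"
thresholds = [-65536, -256, -1, 0, 1, 256, 65536]

-- widen an unstable bound only as far as the next threshold
widen :: Itv -> Itv -> Itv
widen (Itv a b) (Itv c d) = Itv lo hi
  where
    lo | c >= a    = a
       | otherwise = head ([t | t <- reverse thresholds, t <= c] ++ [minBound])
    hi | d <= b    = b
       | otherwise = head ([t | t <- thresholds, t >= d] ++ [maxBound])

main :: IO ()
main = print (widen (Itv 0 10) (Itv 0 11))   -- Itv 0 256, not Itv 0 maxBound
```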

Ivory EDSL in Haskell for Embedded Control

I'm all starry-eyed enamoured with Ivory and Tower.

We report on our experiences in synthesizing a fully-featured autopilot from embedded domain-specific languages (EDSLs) hosted in Haskell. The autopilot is approximately 50k lines of C code generated from 10k lines of EDSL code and includes control laws, mode logic, encrypted communications system, and device drivers. The autopilot was built in less than two engineer years. This is the story of how EDSLs provided the productivity and safety gains to do large-scale low-level embedded programming and lessons we learned in doing so.

Ok for me what I'm excited about: Ivory is like Atom or Copilot only more real-world; Tower is a compositional language to boot; apparently there is an ability to do FFI from Ivory to C. (Now if only I could understand more about any kind of interactive source line debuggery.) I really want to write video games for my Game Boy using this stuff.
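For anyone wondering what "C generated from an EDSL" looks like in miniature, here's a toy sketch (entirely my own, NOT Ivory's actual API): embedded expressions are Haskell data, and code generation is a fold to C text, so the host language's types and abstractions police the embedded program.

```haskell
import Data.List (intercalate)

-- a toy expression EDSL that emits C
data Expr = Lit Int | Var String | Add Expr Expr | Mul Expr Expr

cExpr :: Expr -> String
cExpr (Lit n)   = show n
cExpr (Var x)   = x
cExpr (Add a b) = "(" ++ cExpr a ++ " + " ++ cExpr b ++ ")"
cExpr (Mul a b) = "(" ++ cExpr a ++ " * " ++ cExpr b ++ ")"

-- generate a C function from a definition built in Haskell
genFun :: String -> [String] -> Expr -> String
genFun name args body =
  "int32_t " ++ name ++ "(" ++ intercalate ", " (map ("int32_t " ++) args) ++
  ")\n{ return " ++ cExpr body ++ "; }"

main :: IO ()
main = putStrLn (genFun "madd" ["a", "b", "c"]
                        (Add (Mul (Var "a") (Var "b")) (Var "c")))
```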

Language combining relational algebra and domain algebra

I'm working on a language (and implementation) that combines elements of relational algebra and domain algebra. For clarity, the relational algebra is about operations with relations as arguments, such as join, union and 'is subset of'. The domain algebra is about iterated tuple-level operations used to select, generate or modify relations, such as '>=', '+' and SUM().
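To make the two algebras concrete, here's a toy Haskell sketch (the names and representation are mine, not the language under discussion): relational operators take whole relations as arguments, while domain operators work at the tuple level inside them.

```haskell
-- relations as lists of attribute-name/value tuples
type Tuple = [(String, Int)]
type Rel   = [Tuple]

-- relational algebra: whole-relation operations
unionR :: Rel -> Rel -> Rel
unionR r s = r ++ [t | t <- s, t `notElem` r]

joinR :: Rel -> Rel -> Rel     -- natural join on shared attribute names
joinR r s = [ t1 ++ [kv | kv@(k, _) <- t2, k `notElem` map fst t1]
            | t1 <- r, t2 <- s
            , and [v1 == v2 | (k1, v1) <- t1, (k2, v2) <- t2, k1 == k2] ]

-- domain algebra: tuple-level selection and aggregation
select :: (Tuple -> Bool) -> Rel -> Rel
select = filter

sumOf :: String -> Rel -> Int
sumOf a r = sum [v | t <- r, Just v <- [lookup a t]]

main :: IO ()
main = do
  let emp  = [[("dept", 1), ("sal", 50)], [("dept", 2), ("sal", 70)]]
      dept = [[("dept", 1)], [("dept", 2)]]
  print (sumOf "sal" (select (\t -> lookup "sal" t >= Just 60)
                             (joinR emp dept)))        -- 70
```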

The purpose is similar to the Functional Relational Programming described but not implemented in Out of the Tar Pit.

I have a bibliography of papers and products for some 25 or so relational languages spanning over 30 years, including the Third Manifesto and Tutorial-D, Rel, Alf, Muldis, SIRA-PRISE, BS12, Dataphor, etc. They all emphasise the relational side, and while most have a domain algebra component they do not treat it in any detail. I'm looking for more material on the domain side.

I know of one set of works directly on the subject: Aldat. I would appreciate any links or pointers for additional works in this area, preferably active now or in the recent past.

Intuitionistic Programming Language

The Intuitionistic Programming Language purports to do nifty things:

The semantics of IPL is based on intuitionistic type theory extended with a component model. IPL compiles to LLVM bytecode using a novel algorithm based on term rewriting. The IPL compiler is very small: less than 10,000 lines of OCaml code.

IPL provides

zero abstraction penalty,
a very high level of abstraction,
excellent run time performance,
strong static safety guarantees,
predictable performance without GC pauses,
support for verification and testing, and
a composable effect system based on free monads.

The combination of these features is made possible by an algorithm that eliminates high level constructs at compile time. The limitation implied by this algorithm is that mutable complex state has to be region based (global or local).

IPL is currently in the early stages of development and there are many opportunities to make a large impact on its future direction and for further innovation in the space.
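As a gloss on "a composable effect system based on free monads," here's the generic free-monad construction (the textbook version, not IPL's actual implementation): programs are inert data until an interpreter, or in IPL's case the compiler, gives the effects meaning.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- the free monad over an effect signature f
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Free m) = Free (fmap (fmap g) m)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Free m <*> x = Free (fmap (<*> x) m)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Free m >>= k = Free (fmap (>>= k) m)

-- one effect: writing a line of output
data Teletype k = PutLine String k deriving Functor

say :: String -> Free Teletype ()
say s = Free (PutLine s (Pure ()))

-- an interpreter; a compiler could instead eliminate this statically
run :: Free Teletype a -> IO a
run (Pure a)             = return a
run (Free (PutLine s k)) = putStrLn s >> run k

main :: IO ()
main = run (say "hello" >> say "world")
```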

SML# targets LLVM

I will always have a soft spot in my (otherwise cold, desolate, inchoate) heart for SML.

Their main page is unreachable for me just now (you can get e.g. the google cached version of it still if you like), but their last announcement was in April of this year (2014) so I hope the project is still alive.

SML# is an extension of Standard ML with practically important features, including record polymorphism, seamless interoperability with C, true separate compilation and linking, and native multithread support on multicore CPUs.

The most notable change in SML# version 2 is that the SML# compiler now works with the LLVM Compiler Infrastructure. The new SML# compiler compiles SML# code, including all of the above features, to LLVM IR code and produces native code through LLVM. More than half of the compilation phases and library modules have been rewritten for the LLVM support. These changes also greatly speed up the compilation process.

The major difficulty we have overcome in SML#-LLVM codegen is the treatment of polymorphism and separate compilation with SML#'s native ("unboxed") data representations. This aspect requires both a certain amount of additional type-theoretical development specific to the LLVM target and careful construction of the LLVM codegen. We hope to report on this somewhere.
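Record polymorphism is the headline feature for me. As a rough analogy only (a Haskell mimic, not SML#'s mechanism), SML# infers a single polymorphic type for a field selector that works on any record containing that field, with none of the per-type boilerplate the class-based encoding below needs:

```haskell
-- one "get the name field" function over any record with such a field;
-- in SML# the selector would be inferred polymorphically, no instances
class HasName r where
  name :: r -> String

data Person  = Person  { personName :: String, age :: Int }
data Company = Company { companyName :: String, staff :: Int }

instance HasName Person  where name = personName
instance HasName Company where name = companyName

greet :: HasName r => r -> String
greet r = "hello, " ++ name r

main :: IO ()
main = mapM_ putStrLn [greet (Person "ada" 36), greet (Company "acme" 2)]
```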

Integrating Dependent and Linear Types

This wasn't posted yet, as far as I could find. Sorry if this is a dupe. Neelk doesn't self-promote, I guess. :-)

Abstract

In this paper, we show how to integrate linear types with type dependency by extending the linear/non-linear calculus of Benton to support type dependency. Next, we give an application of this calculus by giving a proof-theoretic account of imperative programming, which requires extending the calculus with computationally irrelevant quantification, proof irrelevance, and a monad of computations. We show the soundness of our theory by giving a realizability model in the style of Nuprl, which permits us to validate not only the β laws for each type, but also the η laws.

These extensions permit us to decompose Hoare triples into a collection of simpler type-theoretic connectives, yielding a rich equational theory for dependently-typed higher-order imperative programs. Furthermore, both the type theory and its model are relatively simple, even when all of the extensions are considered.

Personally I find the abstract a little over my head, but am excited when I read stuff like, "We would like to fully integrate dependent and linear type theory. Linear type theory would permit us to extend the Curry-Howard correspondence to (for example) imperative programs, and type dependency permits giving very precise types to describe the precise functional behavior of those programs."
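The linear half of the combination can at least be sketched in GHC's LinearTypes extension (my illustration; the paper's calculus is its own development): a linear function must consume its argument exactly once.

```haskell
{-# LANGUAGE LinearTypes #-}

swapL :: (a, b) %1 -> (b, a)
swapL (x, y) = (y, x)          -- accepted: each component used exactly once

-- dupL :: a %1 -> (a, a)
-- dupL x = (x, x)             -- rejected by the checker: x used twice

main :: IO ()
main = print (swapL ((1 :: Int), 'a'))
```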

Funny how the 'Fair Reactive' paper was just recently mentioned on the ATS list.

Chocolate & Peanut-butter: Google Blockly & Behavior Threads

My day job involves GUI work on smartphones (pity me) so I like finding out about things like this:

Abstract
We combine visual programming using Google Blockly with a single-threaded implementation of behavioral programming (BP) in JavaScript, and propose design patterns for developing reactive systems such as client-side Web applications and smartphone customization applications as collections of independent cross-cutting scenarios that are interwoven at run time. We show that BP principles can be instrumental in addressing common software engineering issues such as separation of graphical representation from logic and the handling of inter-object scenarios. We also show that a BP infrastructure can be implemented with limited run-time resources in a single-threaded environment using coroutines. In addition to expanding the availability of BP capabilities, we hope that this work will contribute to the evolving directions of technologies and design patterns in developing interactive applications.
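To make "behavioral programming in a single-threaded environment" concrete, here is a toy scheduler (my sketch in Haskell, not the paper's JavaScript): each b-thread declares requested, waited-for, and blocked events at a sync point, and the scheduler repeatedly picks a requested event that no thread blocks.

```haskell
-- a b-thread is finished, or at a sync point with a resumption
data BT = Done | Sync [String] [String] [String] (String -> BT)
--              requested  waited    blocked

-- choose a requested, unblocked event; advance interested threads
step :: [BT] -> Maybe (String, [BT])
step bts =
    case [e | Sync rs _ _ _ <- bts, e <- rs, e `notElem` blocked] of
      []      -> Nothing
      (e : _) -> Just (e, map (advance e) bts)
  where
    blocked = concat [bs | Sync _ _ bs _ <- bts]
    advance e (Sync rs ws bs k)
      | e `elem` rs || e `elem` ws = k e
      | otherwise                  = Sync rs ws bs k
    advance _ Done = Done

run :: [BT] -> [String]
run bts = case step bts of
    Nothing        -> []
    Just (e, bts') -> e : run bts'

-- three independent scenarios, interwoven at run time
addHot, addCold :: Int -> BT
addHot  0 = Done
addHot  n = Sync ["hot"]  [] [] (\_ -> addHot  (n - 1))
addCold 0 = Done
addCold n = Sync ["cold"] [] [] (\_ -> addCold (n - 1))

alternate :: BT   -- after "hot", block "hot" until "cold" happens
alternate = Sync [] ["hot"] [] (\_ ->
            Sync [] ["cold"] ["hot"] (\_ -> alternate))

main :: IO ()
main = mapM_ putStrLn (run [addHot 2, addCold 2, alternate])
-- hot, cold, hot, cold
```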

Future of Programming Videos, final program

The final program from the SPLASH Future of Programming workshop is up. A lot of great videos with a lot of great ideas for the future of our field!

Contents with brief one liners (see the program page for more detailed abstracts):

Antha programming system for Bioinformatics

Antha is a high-level language for biology, making it easy to rapidly compose reproducible work flows using individually testable and reusable Antha Elements.

Recently made open source, the Antha programming system uses flow-based programming. They have set up a bioinformatics-related set of types in the system.
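Flow-based programming in one breath (a generic Haskell sketch, nothing Antha-specific): elements are stream transformers, and a workflow is just their wiring.

```haskell
-- elements as reusable, individually testable stream transformers
type Element a b = [a] -> [b]

double :: Element Int Int
double = map (* 2)

keepEven :: Element Int Int
keepEven = filter even

workflow :: Element Int Int
workflow = double . keepEven   -- wire one element's output to the next

main :: IO ()
main = print (workflow [1 .. 5])   -- [4,8]
```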

(I have nothing to do with the project, haven't even used it, but sure wish I could get paid to do so. :-)
