Implementation

Dependently Typed Programming based on Automated Theorem Proving

Dependently Typed Programming based on Automated Theorem Proving, by Alasdair Armstrong, Simon Foster, and Georg Struth. [Link to preprint on ArXiv; i.e., this has not yet been refereed, use at your own risk].

Mella is a minimalistic dependently typed programming language and interactive theorem prover implemented in Haskell. Its main purpose is to investigate the effective integration of automated theorem provers in a pure and simple setting. Such integrations are essential for supporting program development in dependently typed languages. We integrate the equational theorem prover Waldmeister and test it on more than 800 proof goals from the TPTP library. In contrast to previous approaches, the reconstruction of Waldmeister proofs within Mella is quite robust and does not generate a significant overhead to proof search. Mella thus yields a template for integrating more expressive theorem provers in more sophisticated languages.

Coq and Agda are demonstrating that dependently-typed programming is feasible and beneficial -- but still quite painful in practice. The point of computers is that they can automate a lot of drudgery, and a lot of proofs ought to be considered drudgery as well. Automating those proofs away, however, is still a huge leap in practice. The authors present an interesting experiment in a promising direction.

The LtU angle here is that the proofs produced by current automated theorem provers usually have a huge impedance mismatch with the kind of evidence that a type-checker for a dependently-typed language needs in order to be convinced of the validity of some user code. So there is a non-trivial engineering problem to be solved in building a pleasant environment for dependently-typed programming.

A Monadic Framework for Delimited Continuations

A Monadic Framework for Delimited Continuations (PDF), R. Kent Dybvig, Simon Peyton Jones, Amr Sabry. TR, June 2005.

Delimited continuations are more expressive than traditional abortive continuations and they seemingly require a framework beyond traditional continuation-passing style (CPS). We show that this is not the case: standard CPS is sufficient to explain the common control operators for delimited continuations. We demonstrate this fact and present an implementation as a Scheme library. We then investigate a typed account of delimited continuations that makes explicit where control effects can occur. This results in a monadic framework for typed and encapsulated delimited continuations which we design and implement as a Haskell library.

A fascinating paper about delimited control. I'm very much a newbie to delimited control, but this paper has been enormously helpful - despite the title. ;)

The basic idea of the paper is to represent the execution context as a sequence containing prompts (control delimiters) and the (partial) continuations between prompts. This model is formalized with an operational semantics, which I found insightful even though it's the first operational semantics I've studied.

The authors then present an implementation of the model in terms of call/cc in Scheme. The basic idea here is to always perform user code after aborting to a context near the bottom of the stack, just above a call to an underflow function - this means that even though we use undelimited call/cc, we only ever capture our (small, partial) execution context. The whole execution context (the "metacontinuation") is maintained as a sequence data structure in a global variable (basically, a list containing prompts and Scheme continuations). The underflow function destructures the metacontinuation, and executes (returns to) the partial continuations stored in it. Pushing a prompt adds a delimiter to the metacontinuation, capturing a delimited continuation splits the metacontinuation at a delimiter, and composing a continuation appends to the metacontinuation.
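
To make that bookkeeping concrete, here is a rough OCaml sketch of just the metacontinuation manipulation. The representation and helper names are my own simplification of the paper's, and the call/cc plumbing that actually runs user code on the small stack is omitted entirely; the point is only to show how the sequence is pushed, split, and appended.

    (* The metacontinuation is a sequence of frames: prompts (delimiters)
       and captured partial continuations. Continuations are abstracted
       here as unit -> unit functions. *)
    type prompt = int

    type frame =
      | Prompt of prompt            (* a control delimiter *)
      | Seg of (unit -> unit)       (* a partial continuation between prompts *)

    (* Global metacontinuation; the most recently pushed frame is the head. *)
    let metacont : frame list ref = ref []

    (* Pushing a prompt adds a delimiter. *)
    let push_prompt p = metacont := Prompt p :: !metacont

    (* Capturing a delimited continuation splits the metacontinuation at the
       first occurrence of the prompt and removes the captured frames. *)
    let split_at p =
      let rec go acc = function
        | [] -> failwith "prompt not found"
        | Prompt q :: rest when q = p -> (List.rev acc, rest)
        | f :: rest -> go (f :: acc) rest
      in
      let captured, rest = go [] !metacont in
      metacont := rest;
      captured

    (* Composing a captured continuation appends its frames back on. *)
    let push_seg frames = metacont := frames @ !metacont

    (* Underflow: pop the next frame and return into it; a prompt is
       simply discarded once control passes it. *)
    let rec underflow () =
      match !metacont with
      | [] -> ()
      | Prompt _ :: rest -> metacont := rest; underflow ()
      | Seg k :: rest -> metacont := rest; k ()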

I haven't even gotten to the later parts of the paper yet, but this model and the Scheme implementation alone are worth a look.

(The paper seems to be a reworked version of A Monadic Framework for Subcontinuations, discussed previously.)

Lightweight Monadic Programming in ML

Lightweight Monadic Programming in ML, by Nikhil Swamy, Nataliya Guts, Daan Leijen, and Michael Hicks. ICFP 2011.

Many useful programming constructions can be expressed as monads. Examples include probabilistic modeling, functional reactive programming, parsing, and information flow tracking, not to mention effectful functionality like state and I/O. In this paper, we present a type-based rewriting algorithm to make programming with arbitrary monads as easy as using ML's built-in support for state and I/O. Developers write programs using monadic values of type M t as if they were of type t, and our algorithm inserts the necessary binds, units, and monad-to-monad morphisms so that the program type checks. Our algorithm, based on Jones' qualified types, produces principal types. But principal types are sometimes problematic: the program's semantics could depend on the choice of instantiation when more than one instantiation is valid. In such situations we are able to simplify the types to remove any ambiguity but without adversely affecting typability; thus we can accept strictly more programs. Moreover, we have proved that this simplification is efficient (linear in the number of constraints) and coherent: while our algorithm induces a particular rewriting, all related rewritings will have the same semantics. We have implemented our approach for a core functional language and applied it successfully to simple examples from the domains listed above, which are used as illustrations throughout the paper.
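
As a toy illustration of what that rewriting buys you (my own example in OCaml, using the option monad; the names are hypothetical and this is neither the paper's syntax nor its algorithm): the programmer would like to use a value of type int option as if it were an int, and the elaborated program is the version with the binds and units inserted.

    (* Unit and bind for the option monad. *)
    let return x = Some x
    let bind m f = match m with None -> None | Some x -> f x

    (* A "monadic" lookup: returns int option rather than int. *)
    let lookup_age db name = List.assoc_opt name db

    (* What one would like to write, treating the lookup result as an int:
         let adult db name = lookup_age db name >= 18
       What the type-directed rewriting would elaborate it to: *)
    let adult db name =
      bind (lookup_age db name) (fun age -> return (age >= 18))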

This is an intriguing paper, with an implementation in about 2,000 lines of OCaml. I'm especially interested in its application to probabilistic computing, yielding a result related to Kiselyov and Shan's Hansei effort, but without requiring delimited continuations (not that there's anything wrong with delimited continuations). On a theoretical level, it's nice to see such a compelling example of what can be done once types are freed from the shackle of "describing how bits are laid out in memory" (another such compelling example, IMHO, is type-directed partial evaluation, but that's literally another story).

Levy: a Toy Call-by-Push-Value Language

Andrej Bauer's blog hosts the PL Zoo project. It includes Levy, a toy implementation of Paul Levy's call-by-push-value (CBPV) language in OCaml.

If you're curious about CBPV, this implementation might be a nice accompaniment to the book, or simply a hands-on way to check it out.

It looks like an implementation of CBPV without sum and product types, with complex values, and without effects. I guess a more hands-on way to get to grips with CBPV would be to implement any of these missing features.
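
If you want a rough feel for the value/computation split before poking at the OCaml sources, here is a small, entirely unofficial sketch of mine in which computations are modelled as thunks. It glosses over the real calculus (no separate computation types, no stacks), but it shows where sequencing and forcing happen.

    (* Computations produce a result when run; values are inert data.
       A thunk (CBPV's U) is a suspended computation treated as a value. *)
    type 'a comp = unit -> 'a
    type 'a thunk = 'a comp

    (* produce (CBPV's F): a computation that immediately returns a value. *)
    let produce (v : 'a) : 'a comp = fun () -> v

    (* Sequencing ("m to x. n"): run m, bind its result, continue with n. *)
    let to_ (m : 'a comp) (k : 'a -> 'b comp) : 'b comp = fun () -> k (m ()) ()

    (* thunk / force mediate between values and computations. *)
    let thunk (m : 'a comp) : 'a thunk = m
    let force (t : 'a thunk) : 'a comp = t

    (* Effects happen when a computation is forced, not when it is thunked:
       this prints "hi" twice. *)
    let hello = thunk (fun () -> print_endline "hi")
    let twice = to_ (force hello) (fun () -> force hello)
    let () = twice ()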

The posts are three years old, but I've only just noticed them. The PL Zoo project was briefly mentioned here.

Specification and Verification: The Spec# Experience

Mike Barnett, Manuel Fähndrich, K. Rustan M. Leino, Peter Müller, Wolfram Schulte, and Herman Venter, "Specification and Verification: The Spec# Experience". Preprint of an article appearing in the June 2011 CACM.

CACM tagline: Can a programming language really help programmers write better programs?

Spec# is a programming system that facilitates the development of correct software. The Spec# language extends C# with contracts that allow programmers to express their design intent in the code. The Spec# tool suite consists of a compiler that emits run-time checks for contracts, a static program verifier that attempts to mathematically prove the correctness of programs, and an integration into the Visual Studio development environment. Spec# shows how contracts and verifiers can be integrated seamlessly into the software development process. This paper reflects on the six-year history of the Spec# project, scientific contributions it has made, remaining challenges for tools that seek to establish program correctness, and prospects of incorporating verification into everyday software engineering.

Spec# is, in some ways, quite similar to JML+ESC/Java2. But Spec# is a language rather than a set of annotations, which allows it to incorporate features such as a non-null type system and a very tight integration with the IDE.

Spec# was previously mentioned on LtU back in 2005.

Passing a Language through the Eye of a Needle

Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes, "Passing a Language through the Eye of a Needle: How the embeddability of Lua impacted its design", ACM Queue vol. 9, no. 5, May 2011.

A key feature of a scripting language is its ability to integrate with a system language. This integration takes two main forms: extending and embedding. In the first form, you extend the scripting language with libraries and functions written in the system language and write your main program in the scripting language. In the second form, you embed the scripting language in a host program (written in the system language) so that the host can run scripts and call functions defined in the scripts; the main program is the host program.
...
In this article we discuss how embeddability can impact the design of a language, and in particular how it impacted the design of Lua from day one. Lua is a scripting language with a particularly strong emphasis on embeddability. It has been embedded in a wide range of applications and is a leading language for scripting games.

An interesting discussion of some of the considerations that go into supporting embeddability. The design of a language clearly has an influence over the API it supports, but conversely the design of an API can have a lot of influence over the design of the language.

One Pass Real-Time Generational Mark-Sweep Garbage Collection

In One Pass Real-Time Generational Mark-Sweep Garbage Collection, Joe Armstrong and Robert Virding describe a very simple garbage collector used in Erlang*.

Traditional mark-sweep garbage collection algorithms do not allow reclamation of data until the mark phase of the algorithm has terminated. For the class of languages in which destructive operations are not allowed we can arrange that all pointers in the heap always point backwards towards "older" data. In this paper we present a simple scheme for reclaiming data for such language classes with a single pass mark-sweep collector.

We also show how the simple scheme can be modified so that the collection can be done in an incremental manner (making it suitable for real-time collection). Following this we show how the collector can be modified for generational garbage collection, and finally how the scheme can be used for a language with concurrent processes.
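
To see why one pass is enough when every pointer points backwards, here is a small OCaml model of the idea (my own simplification for illustration; it is not the paper's algorithm as implemented, let alone Erlang's runtime code). Sweeping from newest to oldest, anything that could reference a cell has already been visited by the time that cell is reached, so marking and reclamation can be interleaved in a single pass.

    (* A heap cell: a mark bit plus references, which may only point to
       older (lower-indexed) cells. heap.(i) = None means the slot is free. *)
    type cell = {
      mutable marked : bool;
      refs : int list;                  (* indices of older cells *)
    }

    let collect (heap : cell option array) (roots : int list) =
      (* Mark the roots first. *)
      List.iter
        (fun r -> match heap.(r) with Some c -> c.marked <- true | None -> ())
        roots;
      (* Single pass from newest to oldest. *)
      for i = Array.length heap - 1 downto 0 do
        match heap.(i) with
        | None -> ()
        | Some c ->
          if c.marked then begin
            (* Live: its children are older, hence not yet visited; mark them
               now and clear this cell's mark for the next collection. *)
            List.iter
              (fun j -> match heap.(j) with Some d -> d.marked <- true | None -> ())
              c.refs;
            c.marked <- false
          end else
            (* Dead: nothing newer can reference it any more, so reclaim it. *)
            heap.(i) <- None
      done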

Unfortunately, it's very restrictive. In particular, even the "hidden" destructive updates used in a call-by-need language are problematic for this kind of collector.

* I'm not sure whether the described collector is still used in Erlang.

Asynchronous Proof Processing with Isabelle/Scala and Isabelle/jEdit

Asynchronous Proof Processing with Isabelle/Scala and Isabelle/jEdit. Makarius Wenzel, UITP 2010.

After several decades, most proof assistants are still centered around TTY-based interaction in a tight read-eval-print loop. Even well-known Emacs modes for such provers follow this synchronous model based on single commands with immediate response, meaning that the editor waits for the prover after each command. There have been some attempts to re-implement prover interfaces in big IDE frameworks, while keeping the old interaction model. Can we do better than that?

Ten years ago, the Isabelle/Isar proof language already emphasized the idea of proof document (structured text) instead of proof script (sequence of commands), although the implementation was still emulating TTY interaction in order to be able to work with the then emerging Proof General interface. After some recent reworking of Isabelle internals, to support parallel processing of theories and proofs, the original idea of structured document processing has surfaced again.

Isabelle versions from 2009 or later already provide some support for interactive proof documents with asynchronous checking, which awaits to be connected to a suitable editor framework or full-scale IDE. The remaining problem is how to do that systematically, without having to specify and implement complex protocols for prover interaction.

This is the point where we introduce the new Isabelle/Scala layer, which is meant to expose certain aspects of Isabelle/ML to the outside world. The Scala language (by Martin Odersky) is sufficiently close to ML in order to model well-known prover concepts conveniently, but Scala also runs on the JVM and can access existing Java libraries directly. By building more and more external system wrapping for Isabelle in Scala, we eventually reach the point where we can integrate the prover seamlessly into existing IDEs (say Netbeans).

To avoid getting side-tracked by IDE platform complexity, our current experiments are focused on jEdit, which is a powerful editor framework written in Java that can be easily extended by plugin modules. Our plugins are written again in Scala for our convenience, and to leverage the Scala actor library for parallel and interactive programming. Thanks to the Isabelle/Scala layer, the Isabelle/jEdit implementation is very small and simple.

I thought this was a nice paper on the pragmatics of incremental, interactive proof editing. I've suspected for a while that as programming languages and IDEs grow more sophisticated and do more computationally-intensive checks at compile time (including but not limited to theorem proving), it will become similarly important to design our languages to support modular and incremental analysis.

However, IDE designs also need more experimentation, and unfortunately the choice of IDEs to extend seems to be limited to archaic systems like Emacs or industrial behemoths like Eclipse or Visual Studio, both of which constrain the scope for new design -- Emacs is too limited, and the API surface of Eclipse/VS is just too big and irregular. (Agda-mode for Emacs is a heroic but somewhat terrifying piece of elisp.)

Kona

Kona is a new open-source implementation of Arthur Whitney's K, an ASCII-based APL-like language. Kona is a fully working version of K3.

If you haven't ever tried APL/J/K or their ilk, you might find this language incomprehensible at first -- unless you like a challenge! Watch the screencasts or read some of our earlier APL/J stories.

Regardless of your interest in K, any LtUer worth his salt will enjoy the source code. We wrote a bit in the past about the history of the remarkable C coding style it is written in, but I can't locate the link at the moment.

Finding and Understanding Bugs in C Compilers

In Finding and Understanding Bugs in C Compilers, Xuejun Yang, Yang Chen, Eric Eide, and John Regehr of the University of Utah's School of Computing describe Csmith, a fuzzer for testing C compilers. The hard part was avoiding undefined behavior.

Compilers should be correct. To improve the quality of C compilers, we created Csmith, a randomized test-case generation tool, and spent three years using it to find compiler bugs. During this period we reported more than 325 previously unknown bugs to compiler developers. Every compiler we tested was found to crash and also to silently generate wrong code when presented with valid input. In this paper we present our compiler-testing tool and the results of our bug-hunting study. Our first contribution is to advance the state of the art in compiler testing. Unlike previous tools, Csmith generates programs that cover a large subset of C while avoiding the undefined and unspecified behaviors that would destroy its ability to automatically find wrong code bugs. Our second contribution is a collection of qualitative and quantitative results about the bugs we have found in open-source C compilers.
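
As a toy illustration of the "valid by construction" idea (nothing remotely like Csmith, which generates a large subset of C and has to dodge many kinds of undefined and unspecified behavior), here is a random expression generator of mine in OCaml whose programs can never divide by zero; any disagreement between two evaluators, or two compilers, on such a program is then necessarily a bug rather than an artifact of undefined behavior.

    (* Random arithmetic expressions with division, where the divisor is a
       non-zero constant by construction, so evaluation is always defined. *)
    type expr =
      | Const of int
      | Add of expr * expr
      | Div of expr * expr

    let rec gen depth =
      if depth = 0 then Const (Random.int 100)
      else match Random.int 3 with
        | 0 -> Const (Random.int 100)
        | 1 -> Add (gen (depth - 1), gen (depth - 1))
        | _ -> Div (gen (depth - 1), Const (1 + Random.int 9))  (* never zero *)

    let rec eval = function
      | Const n -> n
      | Add (a, b) -> eval a + eval b
      | Div (a, b) -> eval a / eval b

    (* A differential test would compare this reference evaluator against
       another implementation on many randomly generated expressions. *)
    let () =
      Random.self_init ();
      let e = gen 4 in
      print_int (eval e); print_newline ()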

Two bits really stuck out for me. First, formal verification has a real positive impact:

The striking thing about our CompCert results is that the middle-end bugs we found in all other compilers are absent. As of early 2011, the under-development version of CompCert is the only compiler we have tested for which Csmith cannot find wrong-code errors. This is not for lack of trying: we have devoted about six CPU-years to the task. The apparent unbreakability of CompCert supports a strong argument that developing compiler optimizations within a proof framework, where safety checks are explicit and machine-checked, has tangible benefits for compiler users.

And second, code coverage is an inadequate measure of test thoroughness for software as complex as a compiler:

Because we find many bugs, we hypothesized that randomly generated programs exercise large parts of the compilers that were not covered by existing test suites. To test this, we enabled code coverage monitoring in GCC and LLVM. We then used each compiler to build its own test suite, and also to build its test suite plus 10,000 Csmith-generated programs. Table 3 shows that the incremental coverage due to Csmith is so small as to be a negative result. Our best guess is that these metrics are too shallow to capture Csmith’s effects, and that we would generate useful additional coverage in terms of deeper metrics such as path or value coverage.
