Lifted inference: normalizing loops by evaluation. Oleg Kiselyov and Chung-chieh Shan. 2009 Workshop on Normalization by Evaluation.
Many loops in probabilistic inference map almost every individual in their domain to the same result. Running such
loops symbolically takes time sublinear in the domain size. Using normalization by evaluation with first-class delimited continuations, we lift inference procedures to reap this speed-up without interpretive overhead. To express nested loops, we use multiple control delimiters for metacircular interpretation. To express loops over a powerset domain, we convert nested loops over a subset to unnested loops.
The paper is a bit hard to follow, but there are enough little tricks here to merit attentive reading. Or better yet, read the code.
The basic PLT idea might be summed up as doing abstract interpretation on a shallowly embedded DSL using delimited continuations.
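To make the speed-up concrete, here is a minimal Haskell sketch of the underlying idea; the names and simplifications are mine, not the paper's (the authors work in OCaml, and the real machinery is NBE with delimited continuations rather than a data structure). A map over a huge domain that is constant almost everywhere can be represented as a default plus exceptions, so a loop over all individuals costs time proportional to the number of exceptions rather than the domain size:

    import qualified Data.Map as M

    -- A function from a domain of `size` individuals, equal to `dflt`
    -- everywhere except at the keys of `excs`.
    data Lifted k v = Lifted { size :: Integer, dflt :: v, excs :: M.Map k v }

    apply :: Ord k => Lifted k v -> k -> v
    apply (Lifted _ d e) k = M.findWithDefault d k e

    -- Pointwise mapping touches only the exceptional entries.
    mapL :: (v -> w) -> Lifted k v -> Lifted k w
    mapL f (Lifted n d e) = Lifted n (f d) (M.map f e)

    -- Summing over the whole domain without enumerating it:
    -- n copies of the default, corrected at the exceptions.
    sumL :: Lifted k Double -> Double
    sumL (Lifted n d e) = fromInteger n * d + sum [v - d | v <- M.elems e]

Once the exceptions are fixed, sumL no longer depends on the domain size, which is the kind of saving the paper obtains automatically for loops written in the ordinary pointwise style.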
The slides from Tim Sweeney's High Performance Graphics 2009 keynote expand upon his Next Mainstream Programming Language talk, which we discussed some years back. To summarise, Tim makes the following points (in my interpretation):
- The current GPU pipeline is too limiting and too difficult to program
- The CPU roadmaps from Intel and others show a convergence between the CPU and GPU. We're going to see a lot of cores with vector units hanging off a shared cache. The cores are getting simpler, but programming them is getting harder
- Programs that are purely functional can be automatically vectorised, and the new CPUs are going to extend the scope of vectorisation to general programs (see the sketch after this list)
- Economic reality demands tools to make programming this hardware easier. New languages seem the only viable way forward
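To illustrate the vectorisation point with a toy of my own (nothing from the slides): a pure elementwise computation has no loop-carried dependences, so a compiler is free to run its iterations in any order, or eight at a time in SIMD lanes.

    import qualified Data.Vector.Unboxed as V

    -- Each output element depends only on the corresponding input;
    -- no side effects, no aliasing, so iterations are independent.
    brighten :: Double -> V.Vector Double -> V.Vector Double
    brighten k = V.map (\x -> min 1.0 (k * x))

Contrast a loop that writes through an arbitrary pointer: it might alias its own input, and the compiler must assume sequential execution unless it can prove otherwise.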
It's an interesting talk, with a bit more detail at the bit-bashing end than the previous one. The question is: who is going to make this language?
The problem of void calls, or null pointer dereferencing, plagues programs written in any language using pointers or references with a "null" or "void" value. Tony Hoare recently described it as his "billion-dollar mistake". The problem is very simple to state: in the typical object-oriented call x.f (...),
x, a reference, should normally denote an object but can be void, in which case the call will fail and produce an exception, often leading to a crash. This is one of the main sources of instability in today's software.
The void-safety mechanism of Eiffel, as defined in the ISO/ECMA standard (2006), is now fully implemented in EiffelStudio and guarantees the absence of void calls. With version 6.4, beyond the mechanism itself, all libraries have been updated to be void-safe.
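For readers without Eiffel at hand, a rough analogue of the guarantee can be given with option types; this is not Eiffel's attached-type mechanism (which also handles fields and loops, the delicate parts), just the same idea in Haskell clothing: make "possibly void" a different type from "surely an object", so the compiler rejects a call on a maybe-void reference.

    data Account = Account { balance :: Integer }

    deposit :: Integer -> Account -> Account   -- receiver cannot be void
    deposit n a = a { balance = balance a + n }

    accounts :: [(String, Account)]
    accounts = [("alice", Account 100)]

    lookupAccount :: String -> Maybe Account   -- lookup may fail
    lookupAccount name = lookup name accounts

    safeDeposit :: String -> Integer -> Maybe Account
    safeDeposit name n = fmap (deposit n) (lookupAccount name)
    -- The Maybe must be scrutinised before deposit can run; there is
    -- no way to call through a void reference by accident.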
With Alexander Kogtenkov and Emmanuel Stapf I wrote an article describing not only the principles but also the delicate engineering issues that we have encountered and tried to address in making Eiffel void-safe. An abstract, and a link to the PDF of the article, can be found at http://bertrandmeyer.com/tag/void-safety/.
Hoopl: Dataflow Optimization Made Simple by Norman Ramsey, João Dias, and Simon Peyton Jones.
We present Hoopl, a Haskell library that makes it easy for compiler writers to implement program transformations based on dataflow analyses. The compiler writer must identify (a) logical assertions on which the transformation will be based; (b) a representation of such assertions, which should form a lattice of finite height; (c) transfer functions that approximate weakest preconditions or strongest postconditions over the assertions; and (d) rewrite functions whose soundness is justified by the assertions. Hoopl uses the algorithm of Lerner, Grove, and Chambers (2002), which can compose very simple analyses and transformations in a way that achieves the same precision as complex, handwritten "super-analyses." Hoopl will be the workhorse of a new back end for the Glasgow Haskell Compiler (version 6.12, forthcoming).
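To fix intuitions, here is a condensed rendition of the four ingredients for a constant-propagation pass. The types and names below are hypothetical, not Hoopl's interface (see the paper for that):

    import qualified Data.Map as M

    -- (a),(b): assertions "variable x holds fact f", drawn from a
    -- lattice of finite height with a join operation.
    data Fact = Unknown | Const Int | Top deriving Eq

    join :: Fact -> Fact -> Fact
    join Unknown f = f
    join f Unknown = f
    join (Const a) (Const b) | a == b = Const a
    join _ _ = Top

    data Insn = Assign String Int | Add String String String
    type Env  = M.Map String Fact

    -- (c): a forward transfer function approximating strongest
    -- postconditions over the assertions.
    transfer :: Insn -> Env -> Env
    transfer (Assign x n) env = M.insert x (Const n) env
    transfer (Add x y z)  env =
      case (M.findWithDefault Top y env, M.findWithDefault Top z env) of
        (Const a, Const b) -> M.insert x (Const (a + b)) env
        _                  -> M.insert x Top env

    -- (d): a rewrite whose soundness the current facts justify.
    rewrite :: Insn -> Env -> Maybe Insn
    rewrite (Add x y z) env =
      case (M.lookup y env, M.lookup z env) of
        (Just (Const a), Just (Const b)) -> Just (Assign x (a + b))
        _                                -> Nothing
    rewrite _ _ = Nothing

The Lerner-Grove-Chambers trick is that such rewrites are interleaved with the analysis itself, so composing simple passes recovers the precision of a hand-fused super-analysis.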
A continuation of work previously mentioned on LtU here. Original people, new language. More purity, more goodness. Sinfully, there does not yet seem to be a Hackage package.
A Verified Compiler for an Impure Functional Language
We present a verified compiler to an idealized assembly language from a small, untyped functional language with mutable references and exceptions. The compiler is programmed in the Coq proof assistant and has a proof of total correctness with respect to big-step operational semantics for the source and target languages. Compilation is staged and includes standard phases like translation to continuation-passing style and closure conversion, as well as a common subexpression elimination optimization. In this work, our focus has been on discovering and using techniques that make our proofs easy to engineer and maintain. While most programming language work with proof assistants uses very manual proof styles, all of our proofs are implemented as adaptive programs in Coq’s tactic language, making it possible to reuse proofs unchanged as new language features are added.
In this paper, we focus especially on phases of compilation that rearrange the structure of syntax with nested variable binders. That aspect has been a key challenge area in past compiler verification projects, with much more effort expended in the statement and proof of binder-related lemmas than is found in standard pencil-and-paper proofs. We show how to exploit the representation technique of parametric higher-order abstract syntax to avoid the need to prove any of the usual lemmas about binder manipulation, often leading to proofs that are actually shorter than their pencil-and-paper analogues. Our strategy is based on a new approach to encoding operational semantics which delegates all concerns about substitution to the meta language, without using features incompatible with general-purpose type theories like Coq’s logic.
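For those who haven't met it, parametric HOAS is easy to sketch in Haskell (the paper, of course, works in Coq): binders are functions of the meta language, so there is no substitution function over named variables to define, and no freshness or alpha-equivalence lemmas to prove.

    {-# LANGUAGE RankNTypes #-}

    data ExpF v
      = Var v
      | App (ExpF v) (ExpF v)
      | Lam (v -> ExpF v)          -- the binder is a meta-level function

    -- Keeping the variable type abstract rules out "exotic" terms
    -- that case-analyse their own variables.
    type Exp = forall v. ExpF v

    -- Substitution is inherited from the meta language: instantiate
    -- the variable type at expressions themselves, then flatten.
    subst :: ExpF (ExpF v) -> ExpF v
    subst (Var e)   = e
    subst (App f a) = App (subst f) (subst a)
    subst (Lam f)   = Lam (subst . f . Var)

    -- Operations pick a concrete v; e.g. count variable occurrences.
    countVars :: Exp -> Int
    countVars e = go e
      where
        go :: ExpF () -> Int
        go (Var ())  = 1
        go (App f a) = go f + go a
        go (Lam f)   = go (f ())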
The latest from Adam Chlipala. Yet another evolutionary step for Lambda Tamer. Between this and Ynot, the Coq/certified-compiler story seems to be getting more impressive nearly daily.
Effective Interactive Proofs for Higher-Order Imperative Programs
We present a new approach for constructing and verifying higher-order, imperative programs using the Coq proof assistant. We build on the past work on the Ynot system, which is based on Hoare Type Theory. That original system was a proof of concept, where every program verification was accomplished via laborious manual proofs, with much code devoted to uninteresting low-level details. In this paper, we present a re-implementation of Ynot which makes it possible to implement fully-verified, higher-order imperative programs with reasonable proof burden. At the same time, our new system is implemented entirely in Coq source files, showcasing the versatility of that proof assistant as a platform for research on language design and verification.
Both versions of the system have been evaluated with case studies in the verification of imperative data structures, such as hash tables with higher-order iterators. The verification burden in our new system is reduced by at least an order of magnitude compared to the old system, by replacing manual proof with automation. The core of the automation is a simplification procedure for implications in higher-order separation logic, with hooks that allow programmers to add domain-specific simplification rules.
We argue for the effectiveness of our infrastructure by verifying a number of data structures and a packrat parser, and we compare to similar efforts within other projects. Compared to competing approaches to data structure verification, our system includes much less code that must be trusted; namely, about a hundred lines of Coq code defining a program logic. All of our theorems and decision procedures have or build machine-checkable correctness proofs from first principles, removing opportunities for tool bugs to create faulty verifications.
Adam Chlipala has been telling us how underutilized Coq's Ltac tactic programming language for proof automation is for years. Here is the... er... proof.
Certified Web Services in Ynot
In this paper we demonstrate that it is possible to implement certified web systems in a way not much different from writing Standard ML or Haskell code, including use of imperative features like pointers, files, and socket I/O. We present a web-based course gradebook application developed with Ynot, a Coq library for certified imperative programming. We add a dialog-based I/O system to Ynot, and we extend Ynot’s underlying Hoare logic with event traces to reason about I/O behavior. Expressive abstractions allow the modular certification of both high level specifications like privacy guarantees and low level properties like memory safety and correct parsing.
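A toy rendering of the event-trace idea (mine, in Haskell, nothing like Ynot's Coq encoding): every I/O primitive appends to a trace, and a specification such as a privacy guarantee becomes a predicate over the trace.

    import Control.Monad.Writer
    import Data.List (isInfixOf)

    data Event = Read String | Wrote String deriving (Eq, Show)

    type Traced = Writer [Event]

    recvLine :: String -> Traced String    -- input is simulated here
    recvLine s = tell [Read s] >> return s

    sendLine :: String -> Traced ()
    sendLine s = tell [Wrote s]

    -- A made-up trace property in the spirit of the paper's privacy
    -- guarantees: nothing marked secret is ever written out.
    privacyOK :: [Event] -> Bool
    privacyOK evs = and [not ("secret" `isInfixOf` s) | Wrote s <- evs]

In Ynot the trace lives in the Hoare-logic pre- and postconditions rather than in a Writer monad, and privacyOK would be a theorem about all executions, not a runtime check.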
Ynot, always ambitious, takes another serious swing: extracting a real web application from a proof development. In some respects the big news here is the additional coverage that Ynot now offers in terms of support for file and socket I/O, and the event trace mechanism. But there's even bigger news, IMHO, which is the subject of another paper that warrants a separate post.
In Observational Equality, Now! Thorsten Altenkirch, Conor McBride, and Wouter Swierstra have
something new and positive to say about propositional equality in programming and proof systems based on the Curry-Howard correspondence between propositions and types. We have found a way to present a propositional equality type
- which is substitutive, allowing us to reason by replacing equal for equal in propositions;
- which reflects the observable behaviour of values rather than their construction: in particular, we have extensionality — functions are equal if they take equal inputs to equal outputs;
- which retains strong normalisation, decidable typechecking and canonicity — the property that closed normal forms inhabiting datatypes have canonical constructors;
- which allows inductive data structures to be expressed in terms of a standard characterisation of well-founded trees;
- which is presented syntactically — you can implement it directly, and we are doing so—this approach stands at the core of Epigram 2;
- which you can play with now: we have simulated our system by a shallow embedding in Agda 2, shipping as part of the standard examples package for that system [21].

Until now, it has always been necessary to sacrifice some of these aspects. The closest attempt in the literature is Altenkirch’s construction of a setoid-model for a system with canonicity and extensionality on top of an intensional type theory with proof-irrelevant propositions [4]. Our new proposal simplifies Altenkirch’s construction by adopting McBride’s heterogeneous approach to equality.
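For contrast, here is roughly the propositional equality Haskell programmers can write today (essentially Data.Type.Equality from base). It delivers the first bullet, substitutivity, but it is intensional: there is no way to conclude f :~: g from pointwise equality, and that missing extensionality is exactly the gap observational equality closes.

    {-# LANGUAGE GADTs, TypeOperators #-}

    data a :~: b where
      Refl :: a :~: a

    -- Substitutivity: replace equals for equals under any context f.
    subst :: a :~: b -> f a -> f b
    subst Refl x = x

    sym :: a :~: b -> b :~: a
    sym Refl = Refl

    trans :: a :~: b -> b :~: c -> a :~: c
    trans Refl Refl = Refl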
In Objects as Modules in Newspeak, Gilad Bracha et al. describe a way to avoid the coupling inherent in constructs found in many OO languages, such as a global namespace, "static" stateful variables, globally accessible object constructors, etc.
We describe support for modularity in Newspeak, a new programming language descended from Smalltalk and Self. Like Self, all computation — even an object’s own access to its internal structure — is performed by invoking methods on objects. However, like Smalltalk, Newspeak is class-based. Classes can be nested arbitrarily, as in Beta. Since all names denote method invocations, all classes are virtual; in particular, superclasses are virtual, so all classes act as mixins. Unlike its predecessors, there is no static state in Newspeak, nor is there a global namespace. Top level classes act as module definitions, which are independent, immutable, self-contained parametric namespaces. They can be instantiated into modules which may be stateful and mutually recursive. Naturally, like its predecessors, Newspeak is reflective: a mirror library allows structured access to the program meta-level.
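A rough transliteration into Haskell records (mine; it loses the inheritance, mixin and reflection story entirely) of "top-level classes act as parametric namespaces": a module definition is a function from its dependencies to a record of operations, and there is no global namespace to reach into.

    import Data.IORef

    data Logger = Logger { logMsg :: String -> IO () }
    data Clock  = Clock  { now :: IO Integer }

    data Cache = Cache { insert    :: String -> String -> IO ()
                       , lookupKey :: String -> IO (Maybe String) }

    -- The "module definition": self-contained, all external names
    -- arrive as parameters. Instantiating it yields a module, which
    -- may be stateful; there is no static state anywhere.
    mkCache :: Logger -> Clock -> IO Cache
    mkCache lg clk = do
      ref <- newIORef []                -- module-local state
      pure Cache
        { insert = \k v -> do
            t <- now clk
            logMsg lg ("insert " ++ k ++ " at " ++ show t)
            modifyIORef ref ((k, v) :)
        , lookupKey = \k -> lookup k <$> readIORef ref
        }

Two calls to mkCache yield two independent modules that share nothing unless it is passed in explicitly, which is also why this style lines up with the object-capability reading mentioned below.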
There's a lot in here that should be of interest to LtUers interested in object-capability-based security.