Asynchronous Proof Processing with Isabelle/Scala and Isabelle/jEdit. Makarius Wenzel, UITP 2010.
After several decades, most proof assistants are still centered around TTY-based interaction in a tight read-eval-print loop. Even well-known Emacs modes for such provers follow this synchronous model based on single commands with immediate response, meaning that the editor waits for the prover after each command. There have been some attempts to re-implement prover interfaces in big IDE frameworks, while keeping the old interaction model. Can we do better than that?
Ten years ago, the Isabelle/Isar proof language already emphasized the idea of proof document (structured text) instead of proof script (sequence of commands), although the implementation was still emulating TTY interaction in order to be able to work with the then-emerging Proof General interface. After some recent reworking of Isabelle internals, to support parallel processing of theories and proofs, the original idea of structured document processing has surfaced again.
Isabelle versions from 2009 or later already provide some support for interactive proof documents with asynchronous checking, which has yet to be connected to a suitable editor framework or full-scale IDE. The remaining problem is how to do that systematically, without having to specify and implement complex protocols for prover interaction.
This is the point where we introduce the new Isabelle/Scala layer, which is meant to expose certain aspects of Isabelle/ML to the outside world. The Scala language (by Martin Odersky) is sufficiently close to ML in order to model well-known prover concepts conveniently, but Scala also runs on the JVM and can access existing Java libraries directly. By building more and more external system wrapping for Isabelle in Scala, we eventually reach the point where we can integrate the prover seamlessly into existing IDEs (say Netbeans).
To avoid getting side-tracked by IDE platform complexity, our current experiments are focused on jEdit, which is a powerful editor framework written in Java that can be easily extended by plugin modules. Our plugins are written again in Scala for our convenience, and to leverage the Scala actor library for parallel and interactive programming. Thanks to the Isabelle/Scala layer, the Isabelle/jEdit implementation is very small and simple.
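To make the interaction model concrete, here is a minimal sketch in Scala of an editor and a prover exchanging messages asynchronously via the actor library mentioned above. All names, the message types, and the trivial check function are hypothetical illustrations of the general pattern, not Isabelle/Scala's actual API:

    import scala.actors.Actor
    import scala.actors.Actor._

    // Hypothetical messages; the real Isabelle protocol is much richer.
    case class Edit(command: String)
    case class Result(command: String, status: String)

    object AsyncSession {
      // stand-in for real proof checking
      def check(cmd: String): String = "finished"

      // The prover actor never blocks the editor: each edit is checked
      // in a background actor, and the result is sent back whenever ready.
      val prover = actor {
        loop {
          react {
            case Edit(cmd) =>
              val editor = sender
              actor { editor ! Result(cmd, check(cmd)) }
          }
        }
      }

      def main(args: Array[String]) {
        actor {
          // fire off two edits without waiting in between ...
          prover ! Edit("lemma triv: \"A --> A\"")
          prover ! Edit("by blast")
          // ... then pick up results as they arrive, in any order
          for (_ <- 1 to 2)
            receive { case Result(cmd, status) => println(cmd + ": " + status) }
        }
      }
    }

The point of the sketch is the shape of the protocol: the editor fires edits without waiting, and results come back whenever the prover finishes, possibly out of order.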
I thought this was a nice paper on the pragmatics of incremental, interactive proof editing. I've suspected for a while that as programming languages and IDEs grow more sophisticated and do more computationally-intensive checks at compile time (including but not limited to theorem proving), it will become similarly important to design our languages to support modular and incremental analysis.
However, IDE designs also need more experimentation, and unfortunately the choice of IDEs to extend seems to be limited to archaic systems like Emacs or industrial behemoths like Eclipse or Visual Studio, both of which constrain the scope for new design -- Emacs is too limited, and the API surface of Eclipse/VS is just too big and irregular. (Agda-mode for Emacs is a heroic but somewhat terrifying piece of elisp.)
The first draft of the R7RS small language standard is now available at:
http://trac.sacrideo.us/wg/attachment/wiki/WikiStart/r7rs-draft-1.pdf
This is a relatively small revision to the R5RS, adding many of the most frequently requested features but keeping the overall structure of the report the same, with a current count of 67 pages. In the spirit of the older reports, many situations are left as errors or unspecified, leaving room for implementations to experiment with their own extensions. The language is still small enough to admit a very compact implementation, and the current development branch of Chibi-Scheme provides all of the functionality of the draft (in some cases with different names or modules, pending finalization of the standard).
Read the full announcement at comp.lang.scheme.
The good news is that Jeremy Gibbons is writing a book on Patterns in Functional Programming. Even better news is that he is blogging about it as he goes along!
Those unfamiliar with the topic may want to begin at the beginning, though I personally just rummaged around. Some may enjoy going to the papers rather than the blog, or even better to the LtU discussions about many of them. Alternatively, I think it might be a great opportunity to ask Jeremy questions using the comments on his blog. I am sure this can be a very productive learning experience, and will surely help the book!
Imperative Programs as Proofs via Game Semantics, Martin Churchill, James Laird and Guy McCusker. To appear at LICS 2011.
Game semantics extends the Curry-Howard isomorphism to a three-way correspondence: proofs, programs, strategies. But the universe of strategies goes beyond intuitionistic logics and lambda calculus, to capture stateful programs. In this paper we describe a logical counterpart to this extension, in which proofs denote such strategies. We can embed intuitionistic first-order linear logic into this system, as well as an imperative total programming language. The logic makes explicit use of the fact that in the game semantics the exponential can be expressed as a final coalgebra. We establish a full completeness theorem for our logic, showing that every bounded strategy is the denotation of a proof.
This paper increases the importance of gaining a more-than-casual understanding of game semantics for me, since it combines two of my favorite things: polarized type theories and imperative higher-order programs.
An interesting paper I found on usability and PL; abstract:
While still a relatively young field, computer science has a vast body of knowledge in the domain of programming languages. When a new language is introduced, its designers make claims which distinguish their language from previous languages. However, it often feels like language designers do not feel a pressing need to back these claims with evidence beyond personal anecdotes. Peer reviewers are likely to agree.
In this paper, we present preliminary work which revisits the history of such claims by examining a number of language design papers which span the history of programming language development. We focus on the issue of claim-evidence correspondence, or determining how often claims are or are not backed by evidence. These preliminary results confirm that unsupported claims have been around since the inception of higher level programming in the 1950s. We stake a position that this behavior is unacceptable for the health of the research community. We should be more aware of valiant and effective efforts for supplying evidence to support language design claims.
I found this paper because I've been trying to answer the following question: is user testing performed on PL designs beyond the end-user kind? If yes, how is it done? If no, are we just lazy, or is user testing fundamentally inappropriate for PL design, given the learning curves involved?
Kona is a new open-source implementation of Arthur Whitney's K, an ASCII-based, APL-like language. Kona is a fully working version of K3.
If you have never tried APL, J, K, or their ilk, you might find this language incomprehensible at first -- unless you like a challenge! Watch the screencasts or read some of our earlier APL/J stories.
Regardless of your interest in K, any LtUer worth his salt will enjoy the source code. We wrote a bit in the past about the history of the remarkable C coding style it uses, but I can't locate the link at the moment.
In Finding and Understanding Bugs in C Compilers, Xuejun Yang, Yang Chen, Eric Eide, and John Regehr of the University of Utah School of Computing describe Csmith, a fuzzer for testing C compilers. The hard part was avoiding undefined behavior.
Compilers should be correct. To improve the quality of C compilers, we created Csmith, a randomized test-case generation tool, and spent three years using it to find compiler bugs. During this period we reported more than 325 previously unknown bugs to compiler developers. Every compiler we tested was found to crash and also to silently generate wrong code when presented with valid input. In this paper we present our compiler-testing tool and the results of our bug-hunting study. Our first contribution is to advance the state of the art in compiler testing. Unlike previous tools, Csmith generates programs that cover a large subset of C while avoiding the undefined and unspecified behaviors that would destroy its ability to automatically find wrong code bugs. Our second contribution is a collection of qualitative and quantitative results about the bugs we have found in open-source C compilers.
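The core method is plain differential testing. The loop below is my own reconstruction of the idea, not the authors' harness: it assumes csmith, gcc, and clang are installed (with the Csmith runtime headers on the compilers' include path), and it omits the timeouts and bug triage a real harness needs. Csmith-generated programs print a checksum of their global state, so compilers can be compared on that single line of output:

    import scala.sys.process._

    object DiffTest {
      val compilers = List("gcc -O2", "clang -O2")  // assumed to be on PATH

      def main(args: Array[String]) {
        for (i <- 1 to 100) {
          // generate one random but well-defined C program
          ("csmith" #> new java.io.File("test.c")).!
          val checksums = compilers.map { cc =>
            val rc = (cc + " -w test.c -o test.bin").!  // nonzero rc = crash bug
            if (rc != 0) "CRASH: " + cc
            else "./test.bin".!!.trim                   // run it, capture the checksum
          }
          // any disagreement means wrong code from at least one compiler
          if (checksums.distinct.size > 1)
            println("iteration " + i + ": " + checksums.mkString(" vs "))
        }
      }
    }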
Two bits really stuck out for me. First, formal verification has a real positive impact:
The striking thing about our CompCert results is that the middle-end bugs we found in all other compilers are absent. As of early 2011, the under-development version of CompCert is the only compiler we have tested for which Csmith cannot find wrong-code errors. This is not for lack of trying: we have devoted about six CPU-years to the task. The apparent unbreakability of CompCert supports a strong argument that developing compiler optimizations within a proof framework, where safety checks are explicit and machine-checked, has tangible benefits for compiler users.
And second, code coverage is inadequate for ensuring good test thoroughness for software as complex as a compiler:
Because we find many bugs, we hypothesized that randomly generated programs exercise large parts of the compilers that were not covered by existing test suites. To test this, we enabled code coverage monitoring in GCC and LLVM. We then used each compiler to build its own test suite, and also to build its test suite plus 10,000 Csmith-generated programs. Table 3 shows that the incremental coverage due to Csmith is so small as to be a negative result. Our best guess is that these metrics are too shallow to capture Csmith's effects, and that we would generate useful additional coverage in terms of deeper metrics such as path or value coverage.
Type-checking Modular Multiple Dispatch with Parametric Polymorphism and Multiple Inheritance. Eric Allen, Justin Hilburn, Scott Kilpatrick, Sukyoung Ryu, David Chase, Victor Luchangco, and Guy L. Steele Jr. Submitted for publication, December 2010.
Multiple dynamic dispatch poses many problems for a statically typed language with nominal subtyping and multiple inheritance. In particular, there is a tension between modularity and extensibility when trying to ensure at compile time that each function call dispatches to a unique most specific definition at run time. We have previously shown how requiring that each set of overloaded definitions forms a meet semi-lattice allows one to statically ensure the absence of ambiguous calls while maintaining modularity and extensibility.
In this paper, we extend the rules for ensuring safe overloaded functions to a type system with parametric polymorphism. We show that such an extension can be reduced to the problem of determining subtyping relationships between universal and existential types. We also ensure that syntactically distinct types inhabited by the same sets of values are equivalent under subtyping. Our system has been implemented as part of the open source Fortress compiler.
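The ambiguity at stake is easy to reproduce with Scala traits, with the caveat that Scala resolves overloads statically whereas the paper is about dynamic dispatch; the example below is my own illustration, not code from the paper:

    trait A
    trait B
    class AB extends A with B   // multiple inheritance via traits

    object Overloads {
      def f(x: A, y: B) = "f(A,B)"
      def f(x: B, y: A) = "f(B,A)"
      // Without the next definition, f(new AB, new AB) is rejected as
      // ambiguous: neither overload above is more specific than the other.
      // The meet semi-lattice rule requires exactly such a definition at
      // the meet of the two signatures.
      def f(x: AB, y: AB) = "f(AB,AB)"
    }

With the third definition in place, Overloads.f(new AB, new AB) resolves unambiguously; without it, scalac rejects the call as ambiguous, which is the kind of situation the paper's static checks rule out ahead of time.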
So it's Sunday, and I'm researching the interaction of multiple dispatch, type-checking, and parametric polymorphism, when Google spits out this new paper by the Fortress team.
Scott Kilpatrick's Master's Thesis Ad Hoc: Overloading and Language Design puts it into perspective:
The intricate concepts of ad-hoc polymorphism and overloading permeate the field of programming languages despite their somewhat nebulous definitions. With the perspective afforded by the state-of-the-art, object-oriented Fortress programming language, this thesis presents a contemporary account of ad-hoc polymorphism and overloading in theory and in practice. Common language constructs are reinterpreted with a new emphasis on overloading as a key facility.
Furthermore, concrete problems with overloading in Fortress, encountered during the author's experience in the development of the language, are presented with an emphasis on the ad hoc nature of their solutions.
All of this is a bit over my head, unfortunately, but I find it very interesting nevertheless. And it's heartening that the somewhat neglected multiple dispatch (and multiple inheritance!) is being developed further by such a high-caliber team.
Functor is to Lens as Applicative is to Biplate: Introducing Multiplate is an interesting paper by Russell O'Connor which shows that certain types are isomorphic to quantification over certain type classes. This isomorphism then naturally leads to a generalization of Uniplate/Compos into Multiplate, which allows for rather generic traversals.
The BiplateType datatype defined in the Uniplate library is similar to a lens (a.k.a. a functional reference). This paper proves that Biplate generalizes lens similarly to how applicative functor generalizes functor. This paper gives an alternative definition of BiplateType using a nested data type that better captures the invariants for Biplates than Uniplate's original definition. In previous work, van Laarhoven conjectures an isomorphism between this nested data type and a data type found in the Compos library. This paper completes his proof of the isomorphism by using free theorems for types polymorphic over applicative functors. This result means that, morally speaking, Compos and Uniplate use isomorphic representations for their core data type for generic traversals of simple recursive data types. For mutually recursive data types, Compos and Uniplate provide two different extensions to this common core functionality. Compos requires the user to rewrite a mutually recursive data type as a GADT, while Uniplate's Biplate class is less powerful but only requires multiparameter type classes and does not require rewriting one's data type. This paper proposes a third extension to support mutually recursive data types that is as powerful as Compos, as easy to use as Biplate, and more portable than both of them. This proposal, called Multiplate, only requires rank-3 polymorphism.
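The paper and the libraries it relates are all Haskell, but the core operation everything generalizes is small enough to transliterate. Uniplate's primitive, uniplate :: a -> ([a], [a] -> a), splits a value into its immediate children of the same type plus a rebuild function; here is a sketch of it in Scala (my transliteration for illustration, not part of any of these libraries):

    trait Uniplate[T] {
      // immediate T-children of t, plus a function to rebuild t from them
      def uniplate(t: T): (List[T], List[T] => T)

      // bottom-up transformation, derived entirely from uniplate
      def transform(f: T => T)(t: T): T = {
        val (children, rebuild) = uniplate(t)
        f(rebuild(children.map(transform(f))))
      }
    }

    sealed trait Expr
    case class Lit(n: Int) extends Expr
    case class Add(l: Expr, r: Expr) extends Expr

    object ExprPlate extends Uniplate[Expr] {
      def uniplate(e: Expr): (List[Expr], List[Expr] => Expr) = e match {
        case Lit(n)    => (Nil, (_: List[Expr]) => Lit(n))
        case Add(l, r) => (List(l, r), (cs: List[Expr]) => Add(cs(0), cs(1)))
      }
    }

    // e.g. bottom-up constant folding:
    //   ExprPlate.transform {
    //     case Add(Lit(a), Lit(b)) => Lit(a + b)
    //     case e: Expr             => e
    //   } (Add(Lit(1), Add(Lit(2), Lit(3))))   // ==> Lit(6)

Everything generic (transform, and in the real libraries also queries, folds, and monadic variants) is derived from that one children-plus-rebuild operation; Multiplate's contribution is making the same recipe work across a family of mutually recursive types.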
(Full disclosure: Russell is a post-doc at McMaster, working with me and Bill Farmer. While trying to figure out how to make certain repetitive traversals of MathScheme ASTs into a generic framework, he 'stumbled upon' multiplate, and then kept pursuing the ideas, with the multiplate package and this nice bit of theory as the end result.)
Albert Gräf is the author of the Pure programming language. Pure is a functional programming language based on term rewriting. It has a syntax featuring curried function applications, lexical closures and equational definitions with pattern matching, and thus is somewhat similar to languages of the Haskell and ML variety. Pure is also a dynamic language, and is more like Lisp in this respect. The interpreter has an LLVM backend that does JIT compilation.
Part 1 and Part 2