Lambda Calculus

Pure Subtype Systems

Pure Subtype Systems, by DeLesley S. Hutchins:

This paper introduces a new approach to type theory called pure subtype systems. Pure subtype systems differ from traditional approaches to type theory (such as pure type systems) because the theory is based on subtyping, rather than typing. Proper types and typing are completely absent from the theory; the subtype relation is defined directly over objects. The traditional typing relation is shown to be a special case of subtyping, so the loss of types comes without any loss of generality.

Pure subtype systems provide a uniform framework which seamlessly integrates subtyping with dependent and singleton types. The framework was designed as a theoretical foundation for several problems of practical interest, including mixin modules, virtual classes, and feature-oriented programming.

The cost of using pure subtype systems is the complexity of the meta-theory. We formulate the subtype relation as an abstract reduction system, and show that the theory is sound if the underlying reductions commute. We are able to show that the reductions commute locally, but have thus far been unable to show that they commute globally. Although the proof is incomplete, it is “close enough” to rule out obvious counter-examples. We present it as an open problem in type theory.

A thought-provoking take on type theory using subtyping as the foundation for all relations. He collapses the type hierarchy and unifies types and terms via the subtyping relation. This also has the side-effect of combining type checking and partial evaluation. Functions can accept "types" and can also return "types".
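Schematically (my own gloss on the abstract, not the paper's notation), a conventional system keeps two judgments while a pure subtype system keeps one:

    \text{conventional: } a : A \;\text{(typing)} \qquad A <: B \;\text{(subtyping)}
    \text{pure subtype: a single relation } a <: b \text{ over objects, with } a : A \text{ recovered, roughly, as } a <: A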

Of course, it's not all sunshine and roses. As the abstract explains, the metatheory is quite complicated and soundness is still an open question. Not too surprising considering type checking Type:Type is undecidable.

Hutchins' thesis is also available for a more thorough treatment. This work is all in pursuit of his goal of feature-oriented programming.

Dependently-Typed Metaprogramming (in Agda)

Conor McBride gave an 8-lecture summer course on Dependently typed metaprogramming (in Agda) at the Cambridge University Computer Laboratory:

Dependently typed functional programming languages such as Agda are capable of expressing very precise types for data. When those data themselves encode types, we gain a powerful mechanism for abstracting generic operations over carefully circumscribed universes. This course will begin with a rapid dependently-typed programming primer in Agda, then explore techniques for and consequences of universe constructions. Of central importance are the “pattern functors” which determine the node structure of inductive and coinductive datatypes. We shall consider syntactic presentations of these functors (allowing operations as useful as symbolic differentiation), and relate them to the more uniform abstract notion of “container”. We shall expose the double life containers lead as “interaction structures” describing systems of effects. Later, we step up to functors over universes, acquiring the power of inductive-recursive definitions, and we use that power to build universes of dependent types.

The lecture notes, code, and video captures are available online.

As with his previous course, the notes contain many (!) mind-expanding exploratory exercises, some of which are quite challenging.
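For a taste of what "abstracting generic operations over carefully circumscribed universes" means in practice: the course works in Agda, but the skeleton of a universe construction also fits in a few lines of Haskell with GADTs. The sketch below is my own illustration, not the course's code.

    {-# LANGUAGE GADTs #-}

    -- A tiny closed universe: codes for types, with their interpretation
    -- carried in the GADT index.
    data Code a where
      CUnit :: Code ()
      CBool :: Code Bool
      CPair :: Code a -> Code b -> Code (a, b)
      CList :: Code a -> Code [a]

    -- One generic operation defined once over the whole universe:
    -- structural equality, driven by the code.
    geq :: Code a -> a -> a -> Bool
    geq CUnit       _        _        = True
    geq CBool       x        y        = x == y
    geq (CPair a b) (x1, y1) (x2, y2) = geq a x1 x2 && geq b y1 y2
    geq (CList a)   xs       ys       =
      length xs == length ys && and (zipWith (geq a) xs ys)

    -- e.g. geq (CList CBool) [True, False] [True, False]  ==>  True

The course goes much further, of course: in Agda the codes can themselves depend on values, which is where the pattern functors and inductive-recursive universes come in.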

Mechanized λJS

Mechanized λJS
The Brown PLT Blog, 2012-06-04

In an earlier post, we introduced λJS, our operational semantics for JavaScript. Unlike many other operational semantics, λJS is no toy, but strives to correctly model JavaScript's messy details. To validate these claims, we test λJS with randomly generated tests and with portions of the Mozilla JavaScript test suite.

Testing is not enough. Despite our work, other researchers found a missing case in λJS. Today, we're introducing Mechanized λJS, which comes with a machine-checked proof of correctness, using the Coq proof assistant.

More work on mechanizing the actual, implemented semantics of a real language, rather than a toy.

Milawa on Jitawa: a Verified Theorem Prover

Milawa

Aug 2010 - May 2011. Magnus Myreen has developed a verified Lisp system, named Jitawa, which can run Milawa. Our paper about this project was accepted to ITP 2011.

This is pretty interesting: Milawa was already "self-verifying," in the sense explained on the page. More recently, it's been made to run on a verified Lisp runtime, which means the entire stack down to the x86-64 machine code is verified. Milawa itself is "ACL2-like," so it's not as interesting logically as, say, Isabelle or Coq, but it's far from a toy. Also, the Jitawa formalization apparently took place in HOL4, so you need to trust HOL4. Since HOL4 is an "LCF-like" system, you can do that to the extent that you trust the LCF approach, but it doesn't satisfy the de Bruijn criterion in the same way Milawa or Coq do. Nevertheless, this seems like an important step toward the ultimate goal of having a stack that is verified "all the way down," as it were.

Vellvm: Formalizing the LLVM Intermediate Representation for Verified Program Transformations

Vellvm: Formalizing the LLVM Intermediate Representation for Verified Program Transformations by Jianzhou Zhao, Santosh Nagarakatte, Milo M. K. Martin, and Steve Zdancewic, POPL 2012

This paper presents Vellvm (verified LLVM), a framework for reasoning about programs expressed in LLVM's intermediate representation and transformations that operate on it. Vellvm provides a mechanized formal semantics of LLVM's intermediate representation, its type system, and properties of its SSA form. The framework is built using the Coq interactive theorem prover. It includes multiple operational semantics and proves relations among them to facilitate different reasoning styles and proof techniques.

To validate Vellvm's design, we extract an interpreter from the Coq formal semantics that can execute programs from the LLVM test suite and thus be compared against LLVM reference implementations. To demonstrate Vellvm's practicality, we formalize and verify a previously proposed transformation that hardens C programs against spatial memory safety violations. Vellvm's tools allow us to extract a new, verified implementation of the transformation pass that plugs into the real LLVM infrastructure; its performance is competitive with the non-verified, ad-hoc original.

This obviously represents huge progress in marrying the theoretical benefits of tools like Coq with the practical benefits of tools like LLVM. We can only hope that this spurs further development in practical certified software development.

Deca, an LtU-friendly bare metal systems programming language

The Deca programming language is "a language designed to provide the advanced features of sophisticated, high-level programming languages while still programming as close as possible to the bare metal. It brings in the functional, object-oriented, and generic programming paradigms without requiring a garbage collector or a threading system, so programmers really only pay in performance for the features they use." The latter link provides a list of features that Deca does, will, and won't provide. Features provided include type inference, universally- and existentially-quantified types, and "a strong region-and-effect system that prohibits unsafe escaping pointers and double-free errors".
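Deca's region-and-effect system is its own design, but the basic trick of using a quantified type variable to keep a pointer from escaping its region has a familiar small-scale analogue in Haskell's ST monad. The following is only that analogy, not Deca's mechanism:

    import Control.Monad.ST
    import Data.STRef

    -- runST :: (forall s. ST s a) -> a
    -- The universally quantified region variable s means a reference
    -- allocated inside the region can never leak out of it.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef acc (+ x)) xs
      readSTRef acc

    -- Rejected by the type checker: the STRef would escape its region.
    -- leak = runST (newSTRef (0 :: Int))

Deca aims to give this kind of guarantee in a language without a garbage collector, where dangling pointers and double frees are the real threat.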

The Deca language and ideas behind it are documented in a thesis, The design and implementation of a modern systems programming language (PDF):

Low-level systems programming has remained one of the most consistently difficult tasks in software engineering, since systems programmers must routinely deal with details that programming-language and systems researchers have preferred to abstract away. At least partially, the difficulty arises from not applying the state of the art in programming-languages research to systems programming. I therefore describe the design and implementation of Deca, a systems language based on modern PL principles. Deca makes use of decades of programming-languages research, particularly drawing from the state of the art in functional programming, type systems, extensible data-types and subroutines, modularity, and systems programming-languages research. I describe Deca's feature-set, examine the relevant literature, explain design decisions, and give some of the implementation details for Deca language features. I have been writing a compiler for Deca to translate it into machine code, and I describe the overall architecture of this compiler and some of its details.

The source code for the Deca compiler, decac, is available here. The compiler is implemented in Scala and generates LLVM bytecode. (The author points out in the comments below that this implementation is a work in progress.)

The author of Deca is LtU member Eli Gottlieb, who back in 2008 posted in the forum asking for feedback on his language: Practical Bits of Making a Compiler for a New Language.

There's some more discussion of Deca over at Hacker News.

The Experimental Effectiveness of Mathematical Proof

The Experimental Effectiveness of Mathematical Proof

The aim of this paper is twofold. First, it is an attempt to give an answer to the famous essay of Eugene Wigner about the unreasonable effectiveness of mathematics in the natural sciences [25]. We will argue that mathematics are not only reasonably effective, but that they are also objectively effective in a sense that can be given a precise meaning. For that—and this is the second aim of this paper—we shall reconsider some aspects of Popper’s epistemology [23] in the light of recent advances of proof theory [8, 20], in order to clarify the interaction between pure mathematical reasoning (in the sense of a formal system) and the use of empirical hypotheses (in the sense of the natural sciences).

The technical contribution of this paper is the proof-theoretic analysis of the problem (already evoked in [23]) of the experimental modus tollens, that deals with the combination of a formal proof of the implication U ⇒ V with an experimental falsification of V to get an experimental falsification of U in the case where the formulæ U and V express empirical theories in a sense close to Popper’s. We propose a practical solution to this problem based on Krivine’s theory of classical realizability [20], and describe a simple procedure to extract from a formal proof of U ⇒ V (formalized in classical second-order arithmetic) and a falsifying instance of V a computer program that performs a finite sequence of tests on the empirical theory U until it finds (in finite time) a falsifying instance of U.
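In more familiar terms (my gloss, not the paper's formulation): the extraction gives computational content to the classical contrapositive

    (U \Rightarrow V) \;\Rightarrow\; (\neg V \Rightarrow \neg U)

so that a realizer of U ⇒ V, fed a falsifying experiment for V, becomes a program that runs finitely many experiments against U and halts when one of them fails.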

I thought I had already posted this, but apparently not.

Consider this paper the main gauntlet thrown down to those who insist that mathematical logic, the Curry-Howard Isomorphism, etc. might be fine for "algorithmic code" (as if there were any other kind) but is somehow inapplicable the moment a system interacts with the "real" or "outside" world (as if software weren't real).

Update: the author is Alexandre Miquel, and the citation is a chapter of the book Anachronismes logiques, to appear in the collection Logique, Langage, Sciences, Philosophie from Publications de la Sorbonne, edited by Myriam Quatrini and Samuel Tronçon, 2010.

Levy: a Toy Call-by-Push-Value Language

Andrej Bauer's blog contains the PL Zoo project. In particular, it includes the Levy language, a toy implementation of Paul Levy's CBPV in OCaml.

If you're curious about CBPV, this implementation might be a nice accompaniment to the book, or simply a hands-on way to check it out.

It looks like an implementation of CBPV without sum and product types, with complex values, and without effects. I guess a more hands-on way to get to grips with CBPV would be to implement any of these missing features.
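For orientation, the heart of CBPV is the strict split between values and computations. A rough syntax sketch, written in Haskell rather than the OCaml of the actual implementation, and only a guess at its shape:

    -- Values are inert; computations are the things that run.
    data Value
      = Var String
      | IntV Int
      | Thunk Comp             -- suspend a computation as a value

    data Comp
      = Return Value           -- produce a value
      | Bind Comp String Comp  -- "M to x in N": run M, bind its result in N
      | Force Value            -- resume a thunked computation
      | Lam String Comp        -- functions take values and return computations
      | App Comp Value         -- application pushes a value argument

Sums live on the value side, CBPV has both value and computation products, and effects would come in as extra computation forms, which is presumably where the exercise suggested above would start.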

The posts are 3 years old, but I've only just noticed them. The PL Zoo project was briefly mentioned here.

Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic

Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic

Milawa is a "self-verifying" theorem prover for an ACL2-like logic.

We begin with a simple proof checker, call it A, which is short enough to verify by the "social process" of mathematics.

We then develop a series of increasingly powerful proof checkers, call them B, C, D, and so on. We show that each of these is sound: they accept only the same formulas as A. We use A to verify B, and B to verify C, and so on. Then, since we trust A, and A says B is sound, we can trust B, and so on for C, D, and the rest.

Our final proof checker is really a theorem prover; it can carry out a goal-directed proof search using assumptions, calculation, rewrite rules, and so on. We use this theorem prover to discover the proofs of soundness for B, C, and so on, and to emit these proofs in a format that A can check. Hence, "self verifying."
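The bootstrapping pattern is easy to caricature in a few lines. The toy below is mine, in Haskell, and bears no resemblance to Milawa's actual Lisp or its logic; it only illustrates that a bigger checker is handed out solely on the say-so of a smaller, trusted one.

    type Formula = String      -- formulas are opaque strings in this toy
    type Proof   = [Formula]   -- a "proof" is just a list of steps

    -- The small checker A, trusted by inspection: a proof must end in its goal.
    checkA :: Formula -> Proof -> Bool
    checkA goal steps = not (null steps) && last steps == goal

    -- A more liberal checker B, standing in for "more powerful".
    checkB :: Formula -> Proof -> Bool
    checkB goal steps = goal `elem` steps

    -- Bootstrapping: B becomes available only after A accepts a proof
    -- object for the claim that B is sound.
    trustB :: Proof -> Maybe (Formula -> Proof -> Bool)
    trustB soundnessProof
      | checkA "B is sound" soundnessProof = Just checkB
      | otherwise                          = Nothing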

This might help inform discussions of the relationship between the de Bruijn criterion (the "social process" of mathematics) and formal verification. I think it also serves as an interesting signpost on the road forward: it's one thing to say that starting with a de Bruijn core and evolving a more powerful prover is possible in principle; it's another thing for it to actually have been done. The author's thesis defense slides provide a nice, quick overview.

A Lambda Calculus for Real Analysis

A Lambda Calculus for Real Analysis

Abstract Stone Duality is a revolutionary paradigm for general topology that describes computable continuous functions directly, without using set theory, infinitary lattice theory or a prior theory of discrete computation. Every expression in the calculus denotes both a continuous function and a program, and the reasoning looks remarkably like a sanitised form of that in classical topology. This is an introduction to ASD for the general mathematician, with application to elementary real analysis.

This language is applied to the Intermediate Value Theorem: the solution of equations for continuous functions on the real line. As is well known from both numerical and constructive considerations, the equation cannot be solved if the function "hovers" near 0, whilst tangential solutions will never be found.

In ASD, both of these failures and the general method of finding solutions of the equation when they exist are explained by the new concept of overtness. The zeroes are captured, not as a set, but by higher-type modal operators. Unlike the Brouwer degree, these are defined and (Scott) continuous across singularities of a parametric equation.

Expressing topology in terms of continuous functions rather than sets of points leads to treatments of open and closed concepts that are very closely lattice- (or de Morgan-) dual, without the double negations that are found in intuitionistic approaches. In this, the dual of compactness is overtness. Whereas meets and joins in locale theory are asymmetrically finite and infinite, they have overt and compact indices in ASD.

Overtness replaces metrical properties such as total boundedness, and cardinality conditions such as having a countable dense subset. It is also related to locatedness in constructive analysis and recursive enumerability in recursion theory.

Paul Taylor is deadly serious about the intersection of logic, mathematics, and computation. I came across this after beating my head against Probability Theory: The Logic of Science and Axiomatic Theory of Economics over the weekend, realizing that my math just wasn't up to the tasks, and doing a Google search for "constructive real analysis." "Real analysis" because it was obvious that that was what both of the aforementioned texts were relying on; "constructive" because I'd really like to develop proofs in Coq/extract working code from them. This paper was on the second page of results. Paul's name was familiar (and not just because I share it with him); he translated Jean-Yves Girard's regrettably out-of-print Proofs and Types to English and maintains a very popular set of tools for typesetting commutative diagrams using LaTeX.
