Logic/Declarative

Extensible Effects -- An Alternative to Monad Transformers

Extensible Effects -- An Alternative to Monad Transformers, by Oleg Kiselyov, Amr Sabry and Cameron Swords:

We design and implement a library that solves the long-standing problem of combining effects without imposing restrictions on their interactions (such as static ordering). Effects arise from interactions between a client and an effect handler (interpreter); interactions may vary throughout the program and dynamically adapt to execution conditions. Existing code that relies on monad transformers may be used with our library with minor changes, gaining efficiency over long monad stacks. In addition, our library has greater expressiveness, allowing for practical idioms that are inefficient, cumbersome, or outright impossible with monad transformers.

Our alternative to a monad transformer stack is a single monad, for the coroutine-like communication of a client with its handler. Its type reflects possible requests, i.e., possible effects of a computation. To support arbitrary effects and their combinations, requests are values of an extensible union type, which allows adding and, notably, subtracting summands. Extending and, upon handling, shrinking of the union of possible requests is reflected in its type, yielding a type-and-effect system for Haskell. The library is lightweight, generalizing the extensible exception handling to other effects and accurately tracking them in types.

A follow-up to Oleg's delimited-continuation adaptation of Cartwright and Felleisen's work on Extensible Denotational Language Specifications, which offers a promising alternative to standard monad transformers for composing effects.

This work embeds a user-extensible effect EDSL in Haskell by encoding all effects into a single effect monad, using a novel open union type and the continuation monad. The encoding is very similar to recent work on Algebraic Effects and Handlers, and closely resembles a typed client-server interaction à la coroutines. This seems like a nice convergence of the topics covered in the algebraic effects thread and other recent work on effects, and it's more efficient than monad transformers to boot.
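To make the client/handler picture concrete, here is a minimal, self-contained sketch of my own (not the paper's library): a computation is either a pure value or a request paired with a continuation awaiting the handler's answer. I use a single, closed request type where the paper uses an extensible open union, so this only shows the coroutine-like protocol, not the extensibility.

    {-# LANGUAGE DeriveFunctor #-}

    -- One concrete request type; the paper replaces this with an open,
    -- extensible union so new requests can be added and handled ones removed.
    data Req x = Get (Int -> x) | Put Int x
      deriving Functor

    -- A computation is a pure value or a request plus a continuation.
    data Eff a = Pure a | Request (Req (Eff a))

    instance Functor Eff where
      fmap f (Pure a)    = Pure (f a)
      fmap f (Request r) = Request (fmap (fmap f) r)

    instance Applicative Eff where
      pure = Pure
      mf <*> mx = mf >>= \f -> fmap f mx

    instance Monad Eff where
      Pure a    >>= k = k a
      Request r >>= k = Request (fmap (>>= k) r)

    get :: Eff Int
    get = Request (Get Pure)

    put :: Int -> Eff ()
    put n = Request (Put n (Pure ()))

    -- The handler is the interpreter the client "talks to": it answers
    -- Get/Put requests by threading an Int state, then returns the result.
    runState :: Int -> Eff a -> (a, Int)
    runState s (Pure a)             = (a, s)
    runState s (Request (Get k))    = runState s (k s)
    runState s (Request (Put s' k)) = runState s' k

    example :: (Int, Int)
    example = runState 1 (do n <- get; put (n + 41); get)   -- (42, 42)

The paper's contribution is to keep this request/handler protocol while making the request type an open union, so effects can be mixed freely and each handler subtracts the requests it serves from the computation's type.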

Visi.io

Visi.io comes from David Pollak and aims to revolutionize the building of tablet apps, but the main attraction for now seems to be its exploration of how dataflow and cloud computing can be integrated. The screencast is somewhat underwhelming, but it at least convinces me that there is a working prototype (I haven't looked further than the website as yet). The vision document has some nice ideas. Visi.io came up recently in the discussion of the future of spreadsheets.

How to Make Ad Hoc Proof Automation Less Ad Hoc

How to Make Ad Hoc Proof Automation Less Ad Hoc
Georges Gonthier, Beta Ziliani, Aleksandar Nanevski, and Derek Dreyer, to appear in ICFP 2011

Most interactive theorem provers provide support for some form of user-customizable proof automation. In a number of popular systems, such as Coq and Isabelle, this automation is achieved primarily through tactics, which are programmed in a separate language from that of the prover’s base logic. While tactics are clearly useful in practice, they can be difficult to maintain and compose because, unlike lemmas, their behavior cannot be specified within the expressive type system of the prover itself.

We propose a novel approach to proof automation in Coq that allows the user to specify the behavior of custom automated routines in terms of Coq’s own type system. Our approach involves a sophisticated application of Coq’s canonical structures, which generalize Haskell type classes and facilitate a flexible style of dependently-typed logic programming. Specifically, just as Haskell type classes are used to infer the canonical implementation of an overloaded term at a given type, canonical structures can be used to infer the canonical proof of an overloaded lemma for a given instantiation of its parameters. We present a series of design patterns for canonical structure programming that enable one to carefully and predictably coax Coq’s type inference engine into triggering the execution of user-supplied algorithms during unification, and we illustrate these patterns through several realistic examples drawn from Hoare Type Theory. We assume no prior knowledge of Coq and describe the relevant aspects of Coq type inference from first principles.

If you've ever toyed with Coq but run into the difficulties many encounter in trying to construct robust, comprehensible proof scripts with tactics (which manipulate the proof state, can leave you with the "ground" of the proof rather than the "figure," if you will, and are fragile in the face of change), you may wish to give this a read. It frankly never would have occurred to me to try to turn Ltac scripts into lemmas at all. This is much more appealing than most other approaches to the subject I've seen.
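The Haskell half of the paper's analogy is easy to see in miniature. In the hypothetical class below (my example, not the paper's), asking for size at a compound type makes the compiler search for and compose instances; the paper's point is that Coq's canonical structures let the same kind of search assemble the canonical proof of an overloaded lemma for a given instantiation of its parameters.

    -- A hypothetical type class: instance resolution composes the three
    -- instances below to build `size` at the compound type [(Int, [Int])].
    class Size a where
      size :: a -> Int

    instance Size Int where
      size _ = 1

    instance Size a => Size [a] where
      size = sum . map size

    instance (Size a, Size b) => Size (a, b) where
      size (x, y) = size x + size y

    demo :: Int
    demo = size [(1 :: Int, [2 :: Int, 3])]   -- instance search yields 3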

Interactive Tutorial of the Sequent Calculus

Interactive Tutorial of the Sequent Calculus by Edward Z. Yang.

This interactive tutorial will teach you how to use the sequent calculus, a simple set of rules which you can use to show the truth of statements in first-order logic. It is geared towards anyone with some background in writing software for computers, with knowledge of basic boolean logic. ...

Proving theorems is not for the mathematicians anymore: with theorem provers, it's now a job for the hacker. — Martin Rinard ...

A common complaint with formal systems like the sequent calculus is: "I clicked around and managed to prove this, but I'm not really sure what happened!" This is what Martin means by the hacker mentality: it is now possible for people to prove things even when they don't know what they're doing. The computer will ensure that, in the end, they will have gotten it right.

The tool behind this nice tutorial is Logitext.
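To give a flavor of what the tutorial has you build, here is a tiny derivation in standard LK notation (my own rendering, not Logitext's output): an axiom closes the leaf, and two applications of the right implication rule move hypotheses across the turnstile.

    \[
      \dfrac{\ \dfrac{\ \dfrac{}{A,\ B \ \vdash\ A}\ \text{(ax)}\ }
                     {A \ \vdash\ B \to A}\ (\to R)\ }
            {\vdash\ A \to (B \to A)}\ (\to R)
    \]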

Milawa on Jitawa: a Verified Theorem Prover

Milawa

Aug 2010 - May 2011. Magnus Myreen has developed a verified Lisp system, named Jitawa, which can run Milawa. Our paper about this project was accepted to ITP 2011.

This is pretty interesting: Milawa was already "self-verifying," in the sense explained on the page. More recently, it has been made to run on a verified Lisp runtime, which means the entire stack, down to the x86-64 machine code, is verified. Milawa itself is "ACL2-like," so it's not as interesting logically as, say, Isabelle or Coq, but it's far from a toy. Also, the Jitawa formalization apparently took place in HOL4, so you need to trust HOL4. Since HOL4 is an "LCF-like" system, you can do that to the extent that you trust the LCF approach, but it doesn't satisfy the de Bruijn criterion in the same way Milawa or Coq do. Nevertheless, this seems like an important step toward the ultimate goal of a stack that is verified "all the way down," as it were.

Beyond pure Prolog: Power and danger

One of the sections of Oleg Kiselyov's Prolog and Logic Programming page, on Beyond pure Prolog: power and danger, points out that (i) term introspection (in the guise of the var/1 predicate) can be derived from three of Prolog's imperative features, two of which look quite mild, and (ii) this introspection potentially makes Prolog code hard to understand.

Oleg pointed to this note in response to my defence of cut; it is short, sweet, and well-argued.

The Experimental Effectiveness of Mathematical Proof

The Experimental Effectiveness of Mathematical Proof

The aim of this paper is twofold. First, it is an attempt to give an answer to the famous essay of Eugene Wigner about the unreasonable effectiveness of mathematics in the natural sciences [25]. We will argue that mathematics are not only reasonably effective, but that they are also objectively effective in a sense that can be given a precise meaning. For that—and this is the second aim of this paper—we shall reconsider some aspects of Popper’s epistemology [23] in the light of recent advances of proof theory [8, 20], in order to clarify the interaction between pure mathematical reasoning (in the sense of a formal system) and the use of empirical hypotheses (in the sense of the natural sciences).

The technical contribution of this paper is the proof-theoretic analysis of the problem (already evoked in [23]) of the experimental modus tollens, that deals with the combination of a formal proof of the implication U ⇒ V with an experimental falsification of V to get an experimental falsification of U in the case where the formulæ U and V express empirical theories in a sense close to Popper’s. We propose a practical solution to this problem based on Krivine’s theory of classical realizability [20], and describe a simple procedure to extract from a formal proof of U ⇒ V (formalized in classical second-order arithmetic) and a falsifying instance of V a computer program that performs a finite sequence of tests on the empirical theory U until it finds (in finite time) a falsifying instance of U.
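Spelled out (the gloss and notation here are mine), the "experimental modus tollens" of the abstract is the classical rule

    \[
      \dfrac{\ \vdash U \Rightarrow V \qquad V \ \text{is experimentally falsified}\ }
            {U \ \text{is experimentally falsified}}
    \]

and the paper's claim is that classical realizability makes it effective: the formal proof of U ⇒ V becomes a program which, given a falsifying instance of V, performs a finite sequence of tests on U and returns a falsifying instance of U.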

I thought I had already posted this, but apparently not.

Consider this paper the main gauntlet thrown down to those who insist that mathematical logic, the Curry-Howard Isomorphism, etc. might be fine for "algorithmic code" (as if there were any other kind) but are somehow inapplicable the moment a system interacts with the "real" or "outside" world (as if software weren't real).

Update: the author is Alexandre Miquel, and the citation is "Chapitre du livre Anachronismes logiques, à paraître dans la collection Logique, Langage, Sciences, Philosophie, aux Publications de la Sorbonne. Éd.: Myriam Quatrini et Samuel Tronçon, 2010."

Concurrent Pattern Calculus

Concurrent Pattern Calculus by Thomas Given-Wilson, Daniele Gorla, and Barry Jay:

Concurrent pattern calculus drives interaction between processes by comparing data structures, just as sequential pattern calculus drives computation. By generalising from pattern matching to pattern unification, interaction becomes symmetrical, with information flowing in both directions. This provides a natural language for describing any form of exchange or trade. Many popular process calculi can be encoded in concurrent pattern calculi.

Barry Jay's Pattern Calculus has been discussed here a few times before. I've always been impressed with the pattern calculus's expressive power for computing over arbitrary structure. The pattern calculus supports new forms of polymorphism, which he terms "path polymorphism" and "pattern polymorphism", and which are difficult to provide in other calculi. The closest I can think of would be a compiler-provided generalized fold over any user-defined structure.

This work extends the pattern calculus to the concurrent setting by adding constructs for parallel composition, name restriction, and replication, and argues convincingly for its greater expressiveness compared to other concurrent calculi. It also addresses some of the obvious concerns raised by the symmetric information flow of the unification operation, sketched below.
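To see why unification rather than matching makes the flow symmetric, here is a toy first-order unifier in Haskell (my sketch of the underlying idea only, with names of my choosing; it is not CPC itself, and it omits the occurs check): variables may occur on either side, so each party's pattern can both supply and receive information.

    import           Control.Monad (foldM)
    import qualified Data.Map      as M

    -- A tiny first-order term language: variables may appear on either side.
    data Term = Var String | Con String [Term]
      deriving (Eq, Show)

    type Subst = M.Map String Term

    -- Follow a variable's bindings to its current value, if any.
    walk :: Subst -> Term -> Term
    walk s t@(Var v) = maybe t (walk s) (M.lookup v s)
    walk _ t         = t

    -- Symmetric: either side may bind a variable. One-way matching would
    -- only ever bind variables of the pattern, never of the subject.
    unify :: Term -> Term -> Subst -> Maybe Subst
    unify a b s = case (walk s a, walk s b) of
      (Var x, Var y) | x == y -> Just s
      (Var x, t)              -> Just (M.insert x t s)   -- left side learns
      (t, Var y)              -> Just (M.insert y t s)   -- right side learns
      (Con f as, Con g bs)
        | f == g && length as == length bs
                              -> foldM (\s' (u, v) -> unify u v s') s (zip as bs)
      _                       -> Nothing

    -- A "trade": each party's pattern has a hole that the other fills.
    demo :: Maybe Subst
    demo = unify (Con "offer" [Con "apple" [], Var "price"])
                 (Con "offer" [Var "item",     Con "3c" []])
                 M.empty
    -- binds item |-> apple and price |-> 3c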

Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic

Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic

Milawa is a "self-verifying" theorem prover for an ACL2-like logic.

We begin with a simple proof checker, call it A, which is short enough to verify by the "social process" of mathematics.

We then develop a series of increasingly powerful proof checkers, call them B, C, D, and so on. We show that each of these is sound: they accept only the same formulas as A. We use A to verify B, and B to verify C, and so on. Then, since we trust A, and A says B is sound, we can trust B, and so on for C, D, and the rest.

Our final proof checker is really a theorem prover; it can carry out a goal-directed proof search using assumptions, calculation, rewrite rules, and so on. We use this theorem prover to discover the proofs of soundness for B, C, and so on, and to emit these proofs in a format that A can check. Hence, "self verifying."

This might help inform discussions of the relationship between the de Bruijn criterion (the "social process" of mathematics) and formal verification. I think it also serves as an interesting signpost on the road forward: it's one thing to say that starting with a de Bruijn core and evolving a more powerful prover is possible in principle; it's another thing for it to actually have been done. The author's thesis defense slides provide a nice, quick overview.

A Lambda Calculus for Real Analysis

A Lambda Calculus for Real Analysis

Abstract Stone Duality is a revolutionary paradigm for general topology that describes computable continuous functions directly, without using set theory, infinitary lattice theory or a prior theory of discrete computation. Every expression in the calculus denotes both a continuous function and a program, and the reasoning looks remarkably like a sanitised form of that in classical topology. This is an introduction to ASD for the general mathematician, with application to elementary real analysis.

This language is applied to the Intermediate Value Theorem: the solution of equations for continuous functions on the real line. As is well known from both numerical and constructive considerations, the equation cannot be solved if the function "hovers" near 0, whilst tangential solutions will never be found.

In ASD, both of these failures and the general method of finding solutions of the equation when they exist are explained by the new concept of overtness. The zeroes are captured, not as a set, but by higher-type modal operators. Unlike the Brouwer degree, these are defined and (Scott) continuous across singularities of a parametric equation.

Expressing topology in terms of continuous functions rather than sets of points leads to treatments of open and closed concepts that are very closely lattice- (or de Morgan-) dual, without the double negations that are found in intuitionistic approaches. In this, the dual of compactness is overtness. Whereas meets and joins in locale theory are asymmetrically finite and infinite, they have overt and compact indices in ASD.

Overtness replaces metrical properties such as total boundedness, and cardinality conditions such as having a countable dense subset. It is also related to locatedness in constructive analysis and recursive enumerability in recursion theory.

Paul Taylor is deadly serious about the intersection of logic, mathematics, and computation. I came across this after beating my head against Probability Theory: The Logic of Science and Axiomatic Theory of Economics over the weekend, realizing that my math just wasn't up to the tasks, and doing a Google search for "constructive real analysis." "Real analysis" because it was obvious that that was what both of the aforementioned texts were relying on; "constructive" because I'd really like to develop proofs in Coq/extract working code from them. This paper was on the second page of results. Paul's name was familiar (and not just because I share it with him); he translated Jean-Yves Girard's regrettably out-of-print Proofs and Types to English and maintains a very popular set of tools for typesetting commutative diagrams using LaTeX.
