Proofs as programs

Many software developers are turned away from verification techniques because providing proofs for algorithms seems daunting and purely theoretical to them. A newcomer to a proof assistant like Coq or Isabelle needs at least two weeks before being able to prove even simple theorems. And it is difficult to realize that proving has anything to do with programming.

In order to provide software verification for the masses, a different approach is necessary.

As one gains more experience with proofs, one realizes that writing a proof is not very different from writing a program. Currently I am trying to exploit this technique of writing a proof like a program. It led to the notion of proof procedures. With proof procedures there is no need to issue proof commands to some proof engine in order to manipulate its state. One just writes assertions (i.e. boolean expressions) in a step-by-step logical manner. A proof looks like a procedure.

The article Proofs made easy with proof procedures describes the basic approach.

First- vs second-class proof

Interesting. As I understand it, this is creating a shadow structure that looks, superficially, rather like a procedure, but is actually something else (I'd tend to say, something second-class). The approach I've had in mind (and will presumably get to someday if nobody else does it, and maybe even if someone does) is, by contrast, to use actual procedures to prove things. The most I seem to have written publicly on the concept was an LtU comment, Type inference considered harmful.

Natural deduction versus Hilbert style proofs

I don't understand what you mean by "second-class" in this context. From a proof-theoretical point of view I have just changed from natural deduction to Hilbert-style proofs. It can be proved that natural deduction and Hilbert style are logically equivalent.

The proof assistants Isabelle and Coq work with natural deduction. Hilbert-style proofs just give a sequence of assertions with the side condition that each assertion must be a consequence of some previous assertions, obtained by applying modus ponens only. In their pure form, Hilbert-style proofs are rather impractical (i.e. difficult to master). I have just added the deduction theorem, i.e. (C,a |- b) => (C |- a=>b), in order to make the proofs easier.
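To make the Hilbert-style discipline concrete, here is a minimal sketch in OCaml of a checker for such proofs: a proof is just a list of formulas, each of which must be a premise or follow from two earlier lines by modus ponens. All names and the formula type here are illustrative, not the notation of Modern Eiffel or any proof assistant.

```ocaml
(* A toy checker for Hilbert-style proofs. *)
type formula =
  | Var of string
  | Imp of formula * formula

(* Does f follow from a and b by modus ponens, i.e. is one of a, b
   an implication whose antecedent is the other and whose consequent is f? *)
let by_modus_ponens a b f =
  b = Imp (a, f) || a = Imp (b, f)

(* Check a proof: fold over the steps, requiring each step to be a
   known assertion or a modus-ponens consequence of two known ones. *)
let check_proof premises steps =
  let step known f =
    let ok =
      List.mem f known
      || List.exists
           (fun a -> List.exists (fun b -> by_modus_ponens a b f) known)
           known
    in
    if ok then f :: known else failwith "step does not follow"
  in
  ignore (List.fold_left step premises steps)

(* Prove r from premises p, p => q, q => r, one assertion at a time. *)
let () =
  let p = Var "p" and q = Var "q" and r = Var "r" in
  check_proof [p; Imp (p, q); Imp (q, r)] [q; r];
  print_endline "proof accepted"
```

Each line of such a "proof procedure" is just a boolean-like assertion justified by earlier lines, which is the step-by-step style described above.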

First- and second-class-ness

First- and second-class-ness become treacherous notions when one tries to consider both static and dynamic PLs at the same time. Since you're operating in the static world, and I spend most of my time in the dynamic world, I really should know to tread cautiously.

To my dynamically oriented thinking, any object that exists only at compile time is second-class. In this view, static types are second-class objects (just as traditional macros are); and your procedure-like proofs are then second-class.

What I envision —roughly— is that propositions and theorems would be first-class objects, things that exist at run time. A set of standard procedures is provided for constructing them at run time. The standard constructors for theorems correspond to axioms, and they are chosen to guarantee that a theorem cannot be constructed unless it's true. There might be, for example, a constructor called modus-ponens that takes two theorems, A and A implies B, and constructs a theorem B. The programmer uses the same computational model to write proofs as they do to write the programs that the proofs are about.
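This vision of theorems as first-class run-time values can be sketched in LCF style in OCaml: a module signature hides the representation of theorems, so the only way to obtain one at run time is through trusted constructors corresponding to axioms and modus ponens. The axiom schemas and names below are my own illustrative choices, not anything from the comment above.

```ocaml
type prop = Atom of string | Implies of prop * prop

module Kernel : sig
  type thm
  val concl : thm -> prop
  (* Axiom schema K: |- a => (b => a) *)
  val ax_k : prop -> prop -> thm
  (* Axiom schema S: |- (a => b => c) => (a => b) => (a => c) *)
  val ax_s : prop -> prop -> prop -> thm
  (* Modus ponens: from |- a => b and |- a, conclude |- b *)
  val mp : thm -> thm -> thm
end = struct
  type thm = prop  (* hidden: outside the module, thm is abstract *)
  let concl t = t
  let ax_k a b = Implies (a, Implies (b, a))
  let ax_s a b c =
    Implies (Implies (a, Implies (b, c)),
             Implies (Implies (a, b), Implies (a, c)))
  let mp ab a =
    match ab with
    | Implies (x, y) when x = a -> y
    | _ -> failwith "modus ponens does not apply"
end

(* Derive |- p => p from S and K, entirely at run time (I = S K K). *)
let () =
  let open Kernel in
  let p = Atom "p" in
  let s = ax_s p (Implies (p, p)) p in
  let k1 = ax_k p (Implies (p, p)) in
  let k2 = ax_k p p in
  let t = mp (mp s k1) k2 in
  assert (concl t = Implies (p, p))
```

Because `thm` is abstract, any value of that type reachable by a program is a genuine theorem: proofs are ordinary computations in the same model as the rest of the program.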

To be very clear, I don't mean to denigrate your approach. It's not in the style I prefer, and I find interesting the similarities and differences to my more dynamic approach to an analogous task.

Can be the same structure

It can be, provided the semantics doesn't defer all failure that happens under lambdas to run time. For a contrived example, consider a program that looks like this:

def foo(n:Nat)
    assert n != 5

Then in a typical language you probably expect that program to work unless foo is called with n == 5. But it's also possible to imagine a language in which that's statically an error, since the assumption n:Nat doesn't preclude n == 5. Dependently typed languages/logics usually work this way.

The connection with

The connection with dependent types is a nice point.

RE: Your idea for using actual procedures to prove things

BTW, are you sure your formulation of this idea is consistent? It sounds like the line of thinking that led Curry to encounter Curry's paradox. Basically, you can get into trouble if you allow proving "X -> Y" by proving Y under the assumption X, even if X is a nonsense "proposition" that might not even terminate.

Paradoxes

I have invented nothing new. All the reasoning is based on proof theory. I have just put it into a syntax which might be more attractive to programmers than natural deduction (which is basically typed lambda calculus, according to the Curry-Howard isomorphism).

All proof theories consider x=>y to be proved if you are able to prove y assuming x. This is the semantics of an implication: if I have x, then I can conclude y. If I don't have x, then I cannot conclude anything. The validity of x=>y says nothing about the validity of x. If you cannot prove x, then x=>y does not help you. It is just a hypothetical proposition (i.e. an implication).

I don't know what you mean by a "non-terminating proposition". Probably a proposition which is not provable. Propositions are not calculations; therefore the term "termination" does not make sense to me in this context. If you can prove a proposition, then you have a valid assertion. If you cannot prove it, then you do not (yet) have a valid assertion.

Sorry

I was replying to John. It probably wasn't even a useful comment directed at him, though, since I think he intends only to compute proofs, not treat programs as propositions. Sorry for the confusion.

It's been a few years since

It's been a few years since I last revisited Curry's paradox, but as best I can recall, you've hit the nail on the head: the paradox hinges on treating arbitrary programs as propositions, so a distinct type for propositions should be safe.

Recursive types

It seems to me that Curry's Paradox (as described in the earlier Wikipedia reference) and this talk about non-terminating propositions make reference to recursive types rather than to non-terminating computations encoded in dependent types per se.

When you say that propositions are not calculations (etc), perhaps you're emphasizing again that the Computer Scientists among us should stay within the simply-typed lambda calculus?

Computations and propositions (assertions)

I am not sure whether I've got your point. Let me make some statements about how I see the distinction between assertions (propositions) and computations.

Computations must be computable. This is a requirement for any computation, i.e. it must terminate and satisfy its specification.

In assertions I can use non-computable functions. E.g. I can state that all objects of a certain type satisfy a criterion. This expression is not computable because it states something about a possibly infinite collection of objects, i.e. if we tried to calculate the truth value of such an expression we might enter an infinite loop. But we don't try to compute assertions; we have to prove them.

So in assertions I can use non-computable functions, but there are restrictions. The functions used within such a proposition must be mathematically meaningful, i.e. they must return a unique result and the result must exist. The only relaxation is that the result need not be computable. So ghost functions are not completely arbitrary, but the relaxation of the requirement of computability makes them much more expressive and useful for formulating specifications.

Recursive Types don't Equal Computable...

...in a strict sense; hence the confusion. There are perfectly ordinary recursive types which are computable (most coalgebraic types), and there are recursive types which specify infinite uncomputable structures, but of which you can, for example, take the first n elements.

Maybe it's my lack of understanding of the theory, but I don't see why recursive types equate to uncomputable functions.

Computability

I think we are talking about different things. What do you mean by recursive types? I know of inductive types (also called algebraic types) and coinductive types. Coinductive types (in the definition of Coq) can specify infinite structures.

When I use the term "computable" I use it for functions and predicates (for predicates the term "decidable" is used as well). There are perfectly well-defined functions which are not computable, and there are perfectly well-defined predicates which are not decidable. Such functions/predicates cannot be used in actual computations. E.g. look at:

  if "there are seven 7s in a row in the decimal representation of pi" then
      ...
  else
      ...
  end


But in assertions such a predicate does not cause any problem.
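The distinction can be made concrete in OCaml: the unbounded predicate about the whole decimal expansion of pi cannot simply be evaluated, but restricting it to any finite prefix of digits turns it into an ordinary, terminating computation. The function name and digit list below are my own illustrative choices.

```ocaml
(* Bounded, computable version of the predicate: does the given finite
   list of digits contain a run of at least [len] consecutive 7s? *)
let has_run_of_sevens ~len digits =
  let rec go run = function
    | [] -> false
    | 7 :: rest -> run + 1 >= len || go (run + 1) rest
    | _ :: rest -> go 0 rest
  in
  go 0 digits

let () =
  (* First digits of pi: 3.14159265358979... *)
  let pi_prefix = [3; 1; 4; 1; 5; 9; 2; 6; 5; 3; 5; 8; 9; 7; 9] in
  assert (not (has_run_of_sevens ~len:7 pi_prefix));
  assert (has_run_of_sevens ~len:3 [1; 7; 7; 7; 2])
```

The bounded check over a prefix can only ever answer "found so far" or "not found so far"; the unbounded claim about all of pi is exactly the kind of assertion that must be proved rather than computed.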

Going to war with General Recursion

I don't see why recursive types equate to uncomputable functions.

You can use recursive types to assign a type to fixed-point combinators; add in polymorphic types and you can make a general fixed-point combinator with the (logically) impossible type (a -> a) -> a.

From a fixed-point combinator, you can easily encode divergence. It's in the link I mentioned above, or TaPL's section on recursive types.

Easiest to see with an example...

Here's a bit of OCaml which illustrates:

(* This encodes Dana Scott's famous model of the untyped lambda
   calculus: a type which is isomorphic to the type of functions
   on it. *)
type scott = V of (scott -> scott)

(* Since this is a model of the untyped lambda calculus, we can
   now code up the same constructions which give rise to looping
   in the untyped lambda calculus. For example, here is the U
   combinator. Running this expression at the Caml toplevel will
   set off an infinite loop. *)
let u =
  let self_apply = fun (V f) -> f (V f) in
  self_apply (V self_apply)


The key thing that makes the looping possible is that the datatype declaration for scott has an occurrence of scott to the left of an arrow, in a contravariant position. Inductive and coinductive type declarations forbid this, to rule out examples like scott.

Brainless

It's all very interesting for small ADT definitions, which are easy, but it doesn't carry over to most programs, which spend most of their time in business-logic rules and, far worse, GUI handling; the latter is the most time-consuming and error-prone kind of programming.

Show that it proves termination of Ackermann (a simple function) and of Collatz (like, wow), and that GUI handling can be made error-safe, and maybe you have more than an academic toy application.

All programs and every business logic and GUI is the goal

For small ADTs it is not very interesting. The effort of verification is worthwhile only if it scales to all programs: operating systems, complex GUI handling, concurrency, ...

Currently I cannot see any limit to the approach. But there is no rapid avenue; the groundwork has to be laid step by step.

Brainless ?

I think that formal verification is mature enough to address these problems in general.

Taking your remarks in order:

• For instance, Ackermann is an easy exercise in Coq.
• Of course, formal verification does not help people to find proofs, and Collatz is out of scope.
• I guess Ur/Web can be considered a first step towards making GUI handling error-safe.
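For reference, here is Ackermann's function itself in OCaml. It terminates because the pair (m, n) decreases lexicographically on every recursive call, which is the kind of measure a termination proof in a tool like Coq captures; it is not primitive recursive, yet trivial to define.

```ocaml
(* Ackermann's function: total, but not primitive recursive.
   Termination argument: (m, n) decreases lexicographically. *)
let rec ackermann m n =
  if m = 0 then n + 1
  else if n = 0 then ackermann (m - 1) 1
  else ackermann (m - 1) (ackermann m (n - 1))

let () =
  assert (ackermann 2 3 = 9);
  assert (ackermann 3 3 = 61)
```

Keep the arguments small: the function's growth is famously explosive, which is exactly why it makes a nice termination exercise.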

More generally, various groups have tackled examples that encompass a certified kernel (seL4), a verified compiler (CompCert), and non-trivial mathematical results like the four-colour theorem. And the sky is the limit!

But of course I am not really assessing here the merits of the system at hand, or how it compares with state-of-the-art formal verification... which is another question.

Prior art

I have to admit, I only skimmed your article, but the idea seems to be precisely the same as the means for proving lemmas within program source files, in tools like VeriFast and Dafny. In my opinion, this style is inappropriate for programmers who are capable of using tools like Coq.

And for people not capable of using Coq?

You are right. Dafny gave me the idea to present proofs as proof procedures. If you look into my prior articles, in particular Introduction to the proof engine, you will see a notation similar to Coq's, i.e. proofs using tactics.

For those experienced in proof techniques, both methods are logically equivalent and equally powerful; it is just a different interface to the proof engine. I have included proof procedures in my current definition of Modern Eiffel as a (maybe) more attractive user interface. And the more I use it, the more I like it. It is a little more verbose, but it documents the proof idea better than tactics (or proof commands).

However, I agree that it is a matter of taste. I will explore the notation with proof procedures more in the next articles. But it might be the case that I leave both possibilities in: it does not make sense to try to attract more people on one side and lose the people accustomed to tools like Coq or Isabelle on the other.

C-zar

For comprehensiveness: Coq also includes a declarative proof mode known as the Mathematical Proof Language, or C-zar, in addition to its tactics-based facility. The choice between tactic-based scripts and declarative proofs involves a number of tradeoffs: declarative proofs are generally less brittle than tactic-based scripts, as well as being easier to review and more intuitive to casual users.

On the other hand, experienced users may find that developing non-declarative proof scripts is less work overall; custom tactics can be developed to fit some specific needs, and brittleness problems can be prevented by only using simple scripts and relying on lemmata for organization.

This is my experience as

This is my experience as well. Proofs with proof commands (or tactics) are usually a little bit shorter. Declarative proofs have the advantage of better readability and are easier to learn for newcomers.