Putting Curry-Howard to Work

via LtU forums:

Putting Curry-Howard to Work

The Curry-Howard isomorphism states that types are propositions and that programs are proofs. This allows programmers to state and enforce invariants of programs by using types. Unfortunately, the type systems of today's functional languages cannot directly express interesting properties of programs. To alleviate this problem, we propose the addition of three new features to functional programming languages such as Haskell: Generalized Algebraic Datatypes, Extensible Kind Systems, and the generation, propagation, and discharging of Static Propositions. These three new features are backward compatible with existing features, and combine to enable a new programming paradigm for functional programmers. This paradigm makes it possible to state and enforce interesting properties of programs using the type system, and it does this in a manner that leaves intact the functional programming style, known and loved by functional programmers everywhere.
The paper is short and reasonably easy to read, and the examples are very realistic; check section 13 as a starter.
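For readers who haven't met GADTs yet, here is a minimal sketch in GHC Haskell (not Omega; the names are illustrative) of the classic typed-expression example, where the type index lets the evaluator return different types for different terms without runtime tags:

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT: each constructor refines the type index of Expr,
-- so only well-typed object-language terms can be built.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int  -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- The evaluator needs no error cases: ill-typed terms
-- are rejected by the Haskell type checker.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e
```

Note the type `eval :: Expr a -> a`: pattern matching refines `a` in each branch, which ordinary algebraic datatypes cannot express.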

Getting halfway through the paper made me want to play with the implementation; getting to the last page made my hands itch to implement my own PL with a similar type system.

You are warned :)

Picking the nits

One little change could improve understanding of section 13, though: think of Plus/Minus as Active/Passive, or Master/Slave. I'm not sure what to call Polarity in this case... Mode?

Anyway, this is just a minor (and subjective) note.

regarding example 13

Infopipes is something like what I do every day. I work at Apple in CoreAudio, and the QuickTime folks are down the hall. It is an interesting example, but I don't think I could put it to work in a useful way yet.

The common kinds of errors people have are not ones of type or polarity mismatch; they are usually misstating the type to begin with on both sides of two connected ends, which would mislead even a static checker. An audio stream format type is not something simple like Int; it might be something like "linear PCM at 44100.0 Hz, signed 16 bit stereo, little endian, 2 channels, interleaved" or "linear PCM at 96000.0 Hz, 24 bits per channel unpacked high aligned in 32 bits, big endian, 5.1 channels in order: (left, right, left surround, right surround, center, LFE), noninterleaved", and with compressed formats and muxed streams you get even more situations. These audio stream descriptions could be reified into types, I suppose, but it remains that the stream types are complex to state, that one must take care to state them correctly, and that consistently incorrect statements could lead a format checker to validate an improper construction. And, practically, you need to connect things at runtime, so you still need dynamic checks.
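To make the reification idea concrete, here is a hypothetical sketch in GHC Haskell: two of the format dimensions lifted to the type level as phantom parameters, so connecting mismatched ends fails to type-check. All names here (`Stream`, `sink44k16`, the `Rate`/`Depth` kinds) are made up for illustration, and a real stream description would need many more indices:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Promoted datatypes used as type-level format indices.
data Rate  = R44100 | R96000
data Depth = B16 | B24

-- The payload is untyped here; the format lives only in the index.
newtype Stream (r :: Rate) (d :: Depth) = Stream [Int]

-- Connecting a source to a sink only type-checks when
-- both ends agree on the format indices.
connect :: Stream r d -> (Stream r d -> a) -> a
connect s k = k s

-- A sink that accepts exactly one format.
sink44k16 :: Stream 'R44100 'B16 -> Int
sink44k16 (Stream xs) = length xs
```

Feeding a `Stream 'R96000 'B24` to `sink44k16` is a compile-time error; as the comment above notes, though, the runtime-connection problem still calls for dynamic checks.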

polarities and control flow

Interestingly (or perhaps obviously :) this Infopipe polarity thing can be used to classify the way functions are called in different types of control flow:

evaluation style                       | input   | output
eager, applicative order               | passive | passive
continuation passing style / data flow | passive | active
lazy, normal order                     | active  | passive
message passing concurrency            | active  | active

Eager functions are passed arguments and return control to caller - passive on both ends.
In continuation passing style, functions are called and then call their own successor.
Lazy functions are called for output and call upstream for their inputs.
In message passing concurrency like Erlang you have to call receive to get input and call send to send output - active on both ends.
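The output polarities in the table can be sketched as two stream shapes in plain Haskell (a hypothetical encoding, not from the paper): a passive output is pulled apart by the consumer, while an active output pushes values into a continuation:

```haskell
-- Passive output (lazy / normal order): the consumer pulls.
newtype Pull a = Pull { pull :: (a, Pull a) }

-- Active output (CPS / data flow): the producer pushes
-- each value into a continuation supplied by the consumer.
newtype Push a = Push { push :: (a -> IO ()) -> IO () }

-- An infinite pull-stream of naturals, demanded on request.
nats :: Int -> Pull Int
nats n = Pull (n, nats (n + 1))

takePull :: Int -> Pull a -> [a]
takePull 0 _ = []
takePull k p = let (x, rest) = pull p in x : takePull (k - 1) rest

-- A push-stream that drives the consumer itself.
pushList :: [a] -> Push a
pushList xs = Push (\k -> mapM_ k xs)
```

With `Pull` the caller is active and the stream passive; with `Push` the roles are reversed, matching the table's lazy and CPS rows.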

Polarities in the type system might make for an interesting language that mixes data flow and goal-directed evaluation more fluidly than, for example, BDL.

Pipes should not have polarity

Yes, good comparison. Note though that function declarations should be polarity independent. That's what I found disappointing in the Infopipe example. Pipes don't have inherent polarity (cf. the real world), they get their polarity from the polarity of the sources and sinks.


Pipes don't have inherent polarity (cf. the real world), they get their polarity from the polarity of the sources and sinks.

Sure, but part of this requirement is supported by the example. Both Filter and Map are polymorphic in polarity, but their types are, well, a constraint stating that if you push the transducer, it will push, and if you pull it, it will pull; to match different polarities one uses Buffer and Pump. However, I still suspect I don't understand the concept of polarity, as IMO either Map/Filter or Buffer/Pump seems to me to have the wrong type. The types (as propositions) still have to be mapped to the problem domain, which I don't understand in this case.

buffers and pumps

I don't know if this helps, but processes in Unix or Erlang are pumps in the Infopipes terminology. They are active at both ends. In Erlang you have to call receive and the ! send operator. In Unix you call read(), write(). In order to chain pumps together you need buffers to match the polarity: mailboxes in Erlang, pipes in Unix.
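As a hypothetical Haskell sketch of the same idea, a `Chan` plays the role of the buffer, and a thread that actively calls `readChan` and `writeChan` plays the role of a pump:

```haskell
import Control.Concurrent (Chan, readChan, writeChan)

-- A "pump" in the Infopipes sense: active at both ends.
-- It calls receive (readChan) for input and send (writeChan)
-- for output, like an Erlang process or a Unix process
-- calling read() and write().  The Chans are the buffers
-- (mailboxes / pipes) that match the polarities.
pump :: Chan Int -> Chan Int -> IO ()
pump inC outC = do
  x <- readChan inC
  writeChan outC (x * 2)
```

Chaining two pumps directly is impossible (both want to drive the connection); chaining pump-buffer-pump works, which is exactly the polarity-matching role of the buffer.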


Now I got it. I cannot figure out in what specific way I misunderstood it the first time. It demonstrates how much work (and how many mistakes) actually happen beyond formalizable (or even palpable) issues. Freud to the rescue :-)

What good is a program as proof?

I'm no logician, so go gentle, but what's the point? If a program is also a proof, I can prove that the program does what it says it does, but that doesn't help me prove that the program does what it needs to do. For that I'd need a model theory (I think that's the term) that defines how statements in the logic relate to the real world, and how do you verify the model theory? You have to run tests. So at the end of the day, a program that's a proof has to be tested. We're now testing the proof rather than the program, but the proof is the program, so we're back to where we started, aren't we? Or am I off the wall?

Well, since the customer is not likely...

...to know exactly what they want, tests don't prove anything either - since we're still not sure if the software meets these ever shifting and loosely defined requirements. And since the tests don't verify the customer requirements, then we should dispense with any notion of formal or informal proofs.

And delivering the software...

And delivering the software to the customer will change the situation that prompted the original, poorly understood requirements anyway, resulting in new requirements that the software doesn't yet meet. This is well understood in (some parts of) industry, which is why there is such an interest in evolutionary delivery, XP, etc.

I don't think this means that tests or proofs are pointless. Rather it means that programmers and users need to work very closely together, and that there should be continual feedback, so that what the programmers learn about the domain is fed back to the users, and the users' new understanding is fed back to the programmers.

You need a specification

Every program does what it says it does, unless there's a bug in the compiler or hardware or something. The question is not whether the program does what it says it does, but rather, what does the program say it does? What you want is some specification that states clearly and succinctly (rather than in the long detail of the code) what the program does. When you run tests you are using the tests as a specification - you are simply specifying that the program must perform the particular operations you are testing in the manner specified by the test.

From reading the paper, my understanding of their claims is roughly translatable as follows: types are propositions (a specification of what the program does) and programs are proofs; the problem is that with current type systems, expressing propositions as types doesn't make for very clear propositions. Their extended system is more expressive, so you can write your propositions more clearly. That is, you write your specification of what the program needs to do in their extended type syntax, and the program then becomes a proof that it does this. This is perfectly akin to writing a test as a specification of what the program needs to do and then running the program and getting the desired result as proof that it does this - just with the added bonus that you can write a more general specification rather than a case-by-case one.
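As an illustration of a type acting as a general specification (a sketch in GHC Haskell with type families, weaker than what Omega offers but in the same spirit), the type of `vappend` below states, for all lengths n and m, that appending an n-vector to an m-vector yields an (n + m)-vector, and the definition itself is the proof:

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeFamilies, KindSignatures #-}

-- Type-level natural numbers.
data Nat = Z | S Nat

-- Vectors indexed by their length.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Type-level addition.
type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- The type is the specification: the result length is n + m,
-- for every n and m, not case by case as a test would check.
vappend :: Vec n a -> Vec m a -> Vec (Add n m) a
vappend VNil         ys = ys
vappend (VCons x xs) ys = VCons x (vappend xs ys)

toList :: Vec n a -> [a]
toList VNil         = []
toList (VCons x xs) = x : toList xs
```

A test suite can only check particular lengths; the type above is checked once, for all of them.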

It's actually TWO programs (= TWO proofs)

In my way of thinking, a type system is a programming language at the type level; Omega gives you the ability to write programs at all levels: type, kind, ... The essential idea is that types provide a second version of your program, one that operates on value abstractions (the abstraction is a Galois connection among domains; see Cousot 1997). So, with two programs, the checker's job is to ensure that the two versions (the value-level and the type-level) correspond. Omega checks all the levels.

Usually, people only provide type annotations, because the checker can build most of the program (i.e. the proof of type evaluation). Inferable type systems mean that the checker can build the entire proof/type-level program from the value-level program (NB. not all type systems are inferable). Dependent types work in this description by explicitly breaking the (often overlooked) belief that programs are two-phase (cf. Cardelli 1988). REPLs give another great example of how a strict two-phase distinction forces extra closure properties (witness the heavy use of `rec' definitions in SML/OCaml, though I believe that System E might reduce this).

But, since types provide property-level descriptions (programs), they're *tremendously useful* for checking that the value-level program meets these properties. These properties might include proving that qsort provides a permutation of the original values into a monotonic order; most type systems only let you prove that you take a list/array of elements and return a list/array of elements of that same type. In Omega, you can write a type program that expresses that valuable qsort property, or a type program as part of your interpreter that says your interpreter enforces a sound type system.
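GHC cannot comfortably state the full qsort specification, but a much weaker property in the same spirit fits in a few lines: indexing lists by emptiness makes `safeHead` total by construction (a hypothetical sketch; the names are made up):

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- A type-level property of the list: is it known to be non-empty?
data Emptiness = Empty | NonEmpty

data List (e :: Emptiness) a where
  Nil  :: List 'Empty a
  Cons :: a -> List e a -> List 'NonEmpty a

-- No Nil case is needed or even possible: the type rules it out,
-- so this head function can never fail at runtime.
safeHead :: List 'NonEmpty a -> a
safeHead (Cons x _) = x
```

The full permutation-plus-ordering property of qsort is exactly the kind of type-level program the paper's extensions are meant to express.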

Or TWO specifications

In my way of thinking, a type system is a programming language at the type level

Alternatively, you could say that a programming language is a specification language at the machine level. After all, a program is simply a specification of the behavior that we want a computer to exhibit.

Now, I'm not trying to claim that a programming language is necessarily a "good" (for some value of "good") specification language. I'm simply trying to point out that specification languages and programming languages are (in some senses) essentially the same thing. For some reason people seem to feel the need to separate them somehow - perhaps because "specification" isn't all that popular. But really programming is just a form of specification. The imaginary line between programming and specification becomes particularly blurry once we start talking about functional and logic programming languages, and high-powered type systems.

Ultimately what we are doing, no matter what "level" we are operating at, is formally expressing notions about some pieces of data (our universe of discourse), the fundamental relationships between those pieces (our axioms), and some procedures for manipulating that data in accordance with our axioms. The goal of verification (whether by proof, static checking, or testing) is to demonstrate that there is some preservation of structure between different levels of abstraction.

A compiler is a model generator.

Chris and Leland are correct, and I believe your point can be answered even more directly. My knowledge of logic is quite rusty, but here's an attempt.

A proof demonstrates that a proposition is consistent. Model theory (the branch of logic) tells us that a consistent proposition has one or more models. If you want "a model theory ... that defines how statements in the logic relate to the real world", then the compiler is the answer. It takes a program/proof and relates it to the real world, providing a model. Verifying the "model theory", i.e. the compiler, takes testing.

So no, as regards programs as proofs, you verify them rather than test them.

The catch is that programs are not useful proofs from a superficial point of view. For example, every Haskell program/proof has the same type/proposition as its goal, IO (). In other words, all a Haskell program proves is that you can do arbitrary I/O: trivial. That's why this paper's claim to enable programmers to "state interesting properties of real programs" is significant.

Logic programming

I read this as an elegant argument for logic programming.

PDF version

Here's a PDF version, for whatever it might be worth.

Oh, wow

I'm a LISP geek with a well-known distaste for type systems - they don't catch the kind of errors I tend to make, and they don't support the kind of polymorphism I prefer to use. Plus, many of them are conceptually clunky and don't give me much benefit for my effort. (And don't get me started on OCaml's +. operator, or Haskell's fromInteger. Really.)

But this type system is mind-blowingly cool. It's expressive enough to let me say some really interesting things, and it's flexible enough to handle some pretty funky programs.

I'm probably well on the road to strongly-typed damnation already. ;-)

Next step: GAT?

Now that we have GADTs and kind definitions, when does Tim sell us the following construct:

kind StringTheory :: *1 ~> *1 where
  Strong  :: Hadron a -> Hadron b -> Neutrino (a -> b) -> Boom (a, b)
  Weak    :: Lepton a -> Lepton b -> Neutrino (a -> b)
  Gravity :: .....

The "Generalised Algebraic Kind" (TM) of Everything!

Just joking, of course, but maybe the fundamental forces are described by a type system similar to the one formulated in this paper.

I like it.