## Invariants/Contracts vs. types

I noticed the new (to me) language "Qu" chooses to implement type-checking via invariants (see http://centrin.net.id/~marc/example.html).

(Invariants, contracts, validators - all refer to the same thing here. Any further names?)

Since this is more expressive and potentially "safer" than any type system I'm aware of, I'm wondering:

• how much prior work has been done on such systems?
• why isn't this more common as a safety mechanism in dynamic languages?

• and, perhaps more difficult: what is the "most complex" STATIC type system currently around, in terms of doing compile-time analysis for such value-constraints?

This appears to have been discussed very briefly here, but maybe I missed something.


### Probably the most complex static typesystem

Probably the most complex static typesystem available is that of Qi, a language built on Common Lisp, with a Turing-complete typesystem. This does incur the penalty that the typechecking phase is not guaranteed to terminate, and has no performance guarantees.

### Dependent and Generalized Algebraic Types

what is the "most complex" STATIC type system currently around, in terms of doing compile-time analysis for such value-constraints?

You might want to look at Epigram, Omega, and Cayenne

### The most complex (terminating) type systems

The most complex terminating type systems are theorem provers. For example, many proof assistants are built around things that are essentially programming languages. Look at e.g. ALF, Twelf, Coq, etc. Using the Curry-Howard correspondence (generalized) with respect to any logic gives a type system. Deciding which of these logics/systems is most "complex" is trickier. Most likely many of them should be considered incomparable.
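Under the Curry-Howard correspondence mentioned above, a proposition is a type and a proof is a program inhabiting it. A one-line illustration, sketched in Lean:

```lean
-- Commutativity of conjunction: the "proof" is just a function
-- that swaps the two components of a pair of proofs.
example (P Q : Prop) : P ∧ Q → Q ∧ P :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```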

### Barendregt Cube

It's perhaps worth using the Barendregt Cube as a launching-off point for a discussion like this. Also known as the "Lambda Cube" because Barendregt himself is too modest to call it the "Barendregt cube," it graphically relates various extensions of the simply-typed lambda calculus. That calculus is at the origin, and the axes are labelled with one of the various extensions, so that other vertices are identified by type systems that have the extensions named on the axes leading to those vertices. The result is that the Calculus of Constructions (higher-order, polymorphic, dependent types) lies at the opposite vertex of the simply-typed lambda calculus. The Calculus of Constructions is closely associated with Pure Type Systems, and more recently some effort has been made to address some complexities of Pure Type Systems in the Open Calculus of Constructions and Uniform Pure Type Systems. Briefly and informally, it seems reasonable to suggest that the most "complex" type systems are the Pure Type Systems.
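Informally, the cube's eight vertices, in Barendregt's notation:

```latex
% The lambda cube: \lambda_\to at the origin; the three axes add
% polymorphism (terms depending on types), type operators (types
% depending on types), and dependent types (types depending on terms).
\begin{array}{ll}
\lambda_\to                & \text{simply typed lambda calculus (origin)} \\
\lambda 2                  & +\ \text{polymorphism (System } F\text{)} \\
\lambda \underline{\omega} & +\ \text{type operators} \\
\lambda P                  & +\ \text{dependent types (the basis of LF)} \\
\lambda \omega             & \lambda 2 + \lambda \underline{\omega}\ (\text{System } F_\omega) \\
\lambda P2,\ \lambda P\underline{\omega} & \text{intermediate vertices} \\
\lambda C = \lambda P \omega & \text{Calculus of Constructions (opposite vertex)}
\end{array}
```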

### Comparing type system strength

Deciding which of these logics/systems is most "complex" is trickier. Most likely many of them should be considered incomparable.

The strength of these logics can be compared by analyzing their "proof theoretic strength". This can be characterized by, for instance, arithmetical formula strength, which functions may be expressed, or countable ordinals. I don't know much about any of these; I think the second-order polymorphic lambda calculus has ordinal strength epsilon_0 and can express all functions provably total in second order arithmetic.

But I could be mistaken.

I don't know if the strength of more expressive calculi has been analyzed. Which functions can F_omega express? How about the calculus of constructions?

Do the common additions (Coq is based on the calculus of inductive constructions) to the calculus of constructions add only syntax, or actual expressive power?

EDIT: Luo's thesis says that F_omega, CoC, and ECC all can express the functions provably total in higher order arithmetic. This does not mean that theories beyond F_omega don't add expressive power; Brian Howard's thesis gives examples of languages that can express no more functions than System F can, but can express more algorithms than System F.

### LF vs CoC

Twelf is based on the LF type theory, and Coq is based on the Calculus of Constructions.

LF = lambda calculus with dependent types
CoC = lambda calculus with dependent types, polymorphism, and type operators

Any program you can write in LF is a well-typed program in the CoC.

So why would anyone choose to use LF rather than the CoC, given that the CoC is an enormously more expressive language than LF?

Assuming I haven't misunderstood things, LF has a simpler canonical forms property than the CoC does (because it's a weaker type theory). Pragmatically, this means that you can define theorems by pattern matching, which is a more readable and declarative style than the tactical theorem proving common to CoC-based systems. And IIRC higher-order abstract syntax is more convenient in LF than in CoC, because you can identify the LF context with your programming language's context.

### complex...

what is the "most complex" STATIC type system currently around

Complex eh? That's an odd qualifier, akin to asking what's the most obfuscated program ever written. Personally, I like the simplest thing that will work :-)

You might be asking what is a sound type system that provides the strongest guarantees at compile time, with the least amount of programmer intervention (i.e. as few type declarations as possible and as little human interpretation of the type inference results as needed).

I looked at Qu, and I wasn't really impressed. Maybe I just don't get it, but I'm not seeing how whatever they are doing is any different from, say:

(define (print x)
  (assert (and (string? x) (= (string-length x) 5)))
  ...)


Their type system seems to conflate run-time checking with compile-time checking. You can do something similar in Scheme with a soft typing system, which will remove run-time checks where it can guarantee safety.
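To make the comparison concrete, here is a rough Haskell sketch of the same five-character-string contract (the Str5 type and mkStr5 constructor are invented for illustration): the length check runs once, at the boundary, and the newtype then carries the guarantee statically - roughly the division of labor a soft typing system automates.

```haskell
-- Hypothetical "string of length 5" contract as a smart constructor.
newtype Str5 = Str5 String deriving (Eq, Show)

-- The run-time check happens exactly once, at construction.
mkStr5 :: String -> Maybe Str5
mkStr5 s | length s == 5 = Just (Str5 s)
         | otherwise     = Nothing

-- Downstream code needs no further checks: the type is the guarantee.
printStr5 :: Str5 -> IO ()
printStr5 (Str5 s) = putStrLn s

main :: IO ()
main = print (mkStr5 "hello", mkStr5 "hi")
```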

Another interesting paper on contracts in Scheme is here.

Sure, you can make a compile-time type checker that pretty much interprets the code. But I think the closer you get to complete compile-time interpretation (i.e. where all types are known), the closer you get to automatic unit test generation. With such a sophisticated type check/inference system, I doubt there is much difference between understanding and interpreting its results and understanding the results of running a unit test. Push static type checking too far and I think you run into the law of diminishing returns.

In other words, programmer intervention will always be required to define what "working" means. Static typing just helps to ensure that your current definition of "working" is consistent with the definition you had last week, before your boss interrupted you and made you sit through a four-hour meeting ;-)

### Static typing just helps to

Static typing just helps to ensure that your current definition of "working" is consistent with the definition you had last week, before your boss interrupted you and made you sit through a four-hour meeting ;-)

Rather than waste time with these academic games like type systems, the practical, real-world solution is to eliminate your boss.

### Contracts vs. Types...

It's worth noting that these two things are complementary rather than opposed. There are things that you can easily do with types that you can't do with contracts, and vice versa.

For example, a type system can easily check that parametric types are used correctly, which is something that is difficult-to-impossible to do with contracts. Checking that a list really is a list of integers takes O(N) time, and you simply cannot write a test that enforces that a function always returns an integer for all inputs.
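A small Haskell sketch of that asymmetry (Value and isIntList are invented names): the contract must traverse the list on every check, while the static type [Int] is established once by the compiler.

```haskell
-- A universal value type standing in for dynamically typed data.
data Value = VInt Int | VStr String deriving (Eq, Show)

-- Run-time contract: O(N), inspects every element, on every call.
isIntList :: [Value] -> Bool
isIntList = all isInt
  where isInt (VInt _) = True
        isInt _        = False

-- Static version: the type [Int] makes the run-time check unnecessary.
sumInts :: [Int] -> Int
sumInts = sum

main :: IO ()
main = print (isIntList [VInt 1, VInt 2], isIntList [VInt 1, VStr "x"])
```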

Likewise, a type system is a kind of constructive logic. Even if you are working in a type system that can formalize all of math (like the CoC), what enforcing a complex invariant means is that you are constructing a proof that the invariant holds. This is often vastly more work than "spot-checking" -- the philosophy of contracts is that you fail when the invariant fails, and then assign blame to the correct component.

Getting the optimal level of reliability for a given commitment of resources and performance requirements will involve using both techniques.

### CoC

Even if you are working in a type system that can formalize all of math (like the CoC), . . .

### For every classical theorem P

For every classical theorem P, you have a constructive theorem "(forall Q:Prop. ~~Q => Q) => P". IOW, you simply state the excluded middle as an additional assumption.

If you don't have quantification over propositions (CoC does, LF doesn't) then you can use the Kolmogorov embedding of classical logic into intuitionistic logic.

Or you can simply raise a politely skeptical eyebrow whenever someone appeals to the "law" of the excluded middle. (This is closest to my own prejudice, actually. Though I suppose linear logicians may be just as skeptical of my use of weakening and contraction....)
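A Lean sketch of the first reduction: assuming double-negation elimination for every proposition, the excluded middle itself becomes a constructive theorem.

```lean
-- Assuming ¬¬-elimination for all propositions, derive P ∨ ¬P.
example (dne : ∀ Q : Prop, ¬¬Q → Q) (P : Prop) : P ∨ ¬P :=
  dne (P ∨ ¬P) (fun h => h (Or.inr (fun hp => h (Or.inl hp))))
```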

### Neophyte constructive question

What if both Q and !Q have the possibility of never terminating?

### Propositions aren't programs

What if both Q and !Q have the possibility of never terminating?

I don't think your question makes sense. The Curry-Howard isomorphism identifies propositions (i.e. Q and !Q in your question) with types and proofs with programs. Maybe what you meant to ask was: what happens if we can prove neither Q nor !Q? Well, that's exactly the definition of an undecidable proposition.

Ah, thank you.

### Functional programming and contracts

I was recently reminded of Z, and it struck me as weird that contracts and their automated proofs aren't more appreciated by the functional programming community. After all, the main benefits that functional programming is supposed to provide are easier reasoning about program behaviour and the (remote) possibility of a formal proof of its correctness.

So what would happen if one were to implement Eiffel-style contracts in Haskell or ML? Would they be more or less useful in an FP language than in the OO setting? Also, would the simpler semantics of the surrounding language enable the compiler to automatically prove or disprove some assertions?

Especially in case of Haskell type classes, the lack of invariants really sticks out. For example, here's the Num class declaration from the Haskell 98 Report:

class  (Eq a, Show a) => Num a  where
(+), (-), (*)  :: a -> a -> a
negate         :: a -> a
abs, signum    :: a -> a
fromInteger    :: Integer -> a


A couple of paragraphs under the code, the report states:

The functions abs and signum apply to any number and satisfy the law: abs x * signum x == x

This is a contract that every instance of Num class must fulfill. Though it's stated formally, it's only available for human consumption.

This is not the only example by far. Most class declarations come with laws. Even the omnipresent class Monad is supposed to require every instance to satisfy three specific laws, but Haskell offers no way to specify these laws other than in a comment.
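Lacking language support, such laws can at least be spot-checked at run time. A minimal hand-rolled sketch (the helper names are invented), checking the Num law above and the monad left-identity law for lists on sample values:

```haskell
-- Spot-check two class laws on sample values; a check, not a proof.
numLaw :: Int -> Bool
numLaw x = abs x * signum x == x

-- Monad left identity for lists: return x >>= f  ==  f x
leftIdentity :: Int -> Bool
leftIdentity x = (return x >>= f) == f x
  where f n = [n, n + 1]

main :: IO ()
main = print (all (\x -> numLaw x && leftIdentity x) [-10 .. 10])
```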

### QuickCheck...

QuickCheck gives you a way to specify laws and test them. The QuickCheck in GHC includes isAssociative, isCommutative, isTotalOrder, and more.

The second QuickCheck paper demonstrates a model based specification, and mentions similarities to Z at the beginning of section 5.

### Not quite what I had in mind

I've skimmed the paper. QuickCheck is certainly better than nothing and after this acquaintance I may end up using it in my own code. But it's not what I had in mind at all. A tool like QuickCheck can be written for any language, be it a functional one or not. In fact, it would probably feel more at home in Python than in Haskell. I was thinking of direct functional language compiler support for user-defined assertions about user functions' properties. The same features of functional programming that have made it the testing ground for advanced type systems should also make it easy to test advanced specifications.

I'll try to give an example. Treat the following as daydreaming, it may very well be theoretically impossible.

fac :: Integer -> Integer
fac 0 = 1
fac n = n * fac (n-1)

assert (terminates (fac))

The assertion above is obviously wrong: the function fac does not terminate for arguments n < 0. The assertion can easily be proven wrong by something like QuickCheck, but that assumes we run it first. What I'd like is for the compiler to disprove the assertion and show me a counterexample as soon as I try compiling. Then I could modify the code as follows:

fac :: Integer -> Integer
fac n | n < 0 = error "Expecting a non-negative argument."
fac n = fac' n

fac' 0 = 1
fac' n = n * fac' (n-1)

assert (terminates (fac))
assert (n >= 0 ==> fac n == fac' n)


Now we have a provably correct function, and more than that: the second assertion enables the compiler to optimize a call like fac 5 to fac' 5, because the assertion says the two expressions are equivalent for non-negative n.

Some assertions will always be outside the reach of any theorem prover. But there are ways around that:

• If the prover cannot prove a theorem (i.e., an assertion), it can tell the programmer what lemmas would help prove it. Then the programmer could add some of those lemmas to the code as assertions, and re-run the checker.
• A compiler pragma or command-line option could say that an unprovable assertion should just be trusted.
• Alternatively, an assertion that cannot be statically proven could be:
  • checked at run-time every time a function is invoked, or
  • accepted unless it can be disproven, or
  • tested against random inputs, or
  • all of the above.
• Assertion proving would probably be a time-consuming process, so proofs would have to be cached alongside the object files. A separate tool could be provided to browse the proof files and the strategies used to get them. The programmer could use that knowledge to add more lemma assertions to the code and improve the process.
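Pending such a prover, the run-time fallback in the list above is straightforward. An executable version of the earlier fac/fac' split, with the unprovable precondition checked on every call:

```haskell
-- The precondition n >= 0 is checked at run time on every call,
-- standing in for an assertion the compiler could not discharge.
fac :: Integer -> Integer
fac n
  | n < 0     = error "fac: expecting a non-negative argument"
  | otherwise = fac' n
  where
    fac' 0 = 1
    fac' k = k * fac' (k - 1)

main :: IO ()
main = print (fac 5)  -- 120
```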

But the gist of what I was saying is that the functional programming community should put their money where their mouth is. It's one thing to say that the well-typed Haskell program "just works", and quite another to have an actual proof.

### User functions' properties and type theory

You can prove just about anything for your functions by encoding the information into the type system. I don't want to give out spoilers for "Types and Programming Languages", but a type system is nothing more than a simplified automated proof assistant.

There already are type systems and encodings that can statically determine memory usage and other resource allocations, like network usage. Region allocation is one good example of that.

Epigram is one step closer to being provably correct. I seem to recall that Epigram always terminates, at the cost of not being Turing complete.

You may also want to investigate human-directed automated proof programming. I've only heard of it, but it would probably work, according to the Curry-Howard isomorphism.

Your description of checks moved from compile time to run time is called 'soft typing'.

If you like this kind of thing, you'll like reading Types and Programming Languages by BC Pierce, and its sequel.

Personally, I believe the FP community has put their money where their mouth is... you can tie your program tightly onto a type system skeleton, or you can skip the whole thing and use Dynamics instead.

### power of type theory and my current research

I don't think the restriction on termination necessarily implies not being Turing complete (it just means that you can't write an OS or other "always on" program). There was a series of papers (and a dissertation) in recent years about general recursion in Type Theory, though I don't remember offhand who wrote the stuff.

Additionally, I'd argue that all these other applications could be written if one relaxes the type system to allow the top-level driver loop function to not be structurally recursive.

Part of my current research is exploring how to generalize CPS and closure-conversion-style compilation to languages with dependent types. I'm actually giving a talk on it at this year's Nordic Workshop on Programming Theory.

Benjamin's books are really great stuff.

### Uhm. That a language is Turing complete

Uhm. That a language is Turing complete means that it can do everything that can be done with a Turing machine/lambda calculus. And as these can express non-terminating programs, a language that always terminates is by definition not Turing complete. Although you can of course discuss whether this really is a big hindrance or not, and whether the restrictions on the language make any terminating programs impossible to express.

### A little late

But what I think you're after is EML, which extends Standard ML to allow axioms in signatures.