Languages with built-in rules/tests?

So if you can define type classes in Haskell, what if you could then attach requirements to them by way of tests? You could then enforce whatever e.g. algebraic rules you want on equality or partial ordering. Have it really become part of the original type class definition, and somehow enforce that things pass that test before they can be marked as belonging to that class. A key thing is to have it be part of the base language, not some optional addition. (Sure, there would be usability issues to iron out :-)
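To make the idea concrete, here is a minimal sketch in plain Haskell of the kind of "laws" the post wants attached to a class. The property names (eqSymmetric, eqReflexive) and the hand-rolled sample inputs are my own illustration; in practice one would use a generator-driven tool like QuickCheck, and the point of the post is that today this check is entirely optional rather than part of the class definition:

```haskell
-- Two laws one might want to attach to Eq, written as boolean properties.
eqSymmetric :: Eq a => a -> a -> Bool
eqSymmetric x y = (x == y) == (y == x)

eqReflexive :: Eq a => a -> Bool
eqReflexive x = x == x

main :: IO ()
main = do
  -- Check the laws over a small sample, the way a generator would.
  let xs = [-3 .. 3] :: [Int]
  print (and [eqSymmetric x y | x <- xs, y <- xs])
  print (and (map eqReflexive xs))
```

Nothing in the language requires an Eq instance to pass these checks; the proposal is to make such a requirement part of the class itself.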


Have you looked at Cayenne?

Have you looked at Cayenne? If you read "test" in your post as "proposition" and take "somehow enforce that things pass that test" to be "require proof of these propositions", then it pretty much supports this (look at the examples in the paper). Other dependently typed languages do similar things, but this one is pretty Haskell-ish.


I think I've looked at it before, now that you mention it, because of other LtU discussions, but I wasn't successful in formulating a Google query this time around that brought it up :-). I'll go learn.

Follow-up question: are such things not used because they simply require "too much" effort to learn and use? (Edit: ah, or because they aren't really maintained? Cayenne looks like it is a bit on the dead side (although such judgments are all relative). It is probably a catch-22.)

New breed of dependently typed languages

Cayenne was a very old project. There are new languages that fill a similar gap. Both of the below are Haskelly.

One that's been discussed several times on LtU is Epigram.

Another, less discussed on LtU and closer to a theorem prover, is Agda, a language closely related to Cayenne, and a newer, more programmery Agda 2.

Both of these projects are very alive and well.

Proof carrying code

is clearly relevant and even turns up work by an old college professor of mine. chagrined.

This is the Domain...

...of proof assistants and, as Matt M pointed out, dependent types.

The critical thing to be aware of is the Curry-Howard, or Curry-de Bruijn-Howard, Isomorphism, which tells us that theorems are types, and proofs are programs, and vice-versa. This might have remained just an interesting theoretical result, but de Bruijn, starting with AutoMath, and many others since, have actually put the isomorphism to use. It's the reason, for example, that you can extract Scheme, OCaml, or Haskell code from proofs in Coq.
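A tiny Haskell illustration of the isomorphism (the function names here are my own): a proposition corresponds to a type, and any total program inhabiting that type is a proof of it.

```haskell
-- The proposition A -> (B -> A) corresponds to the type a -> b -> a;
-- this total function is a proof of it.
constProof :: a -> b -> a
constProof a _ = a

-- (A -> B) -> (B -> C) -> (A -> C), i.e. transitivity of implication;
-- flipped function composition is its proof.
transProof :: (a -> b) -> (b -> c) -> (a -> c)
transProof f g = g . f

main :: IO ()
main = print (transProof (+1) (*2) 5)  -- a "proof" used as an ordinary function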

So what's the difference between a proof assistant like Coq, and a dependently typed programming language like Cayenne? Only a few things, really:

  1. Proof assistants tend to be allergic to non-termination. In Coq, functions can only be "simply," that is, structurally recursive. Some dependently-typed languages, such as Cayenne, have Turing-complete type systems: it's possible for the compiler not to terminate while compiling your code.
  2. Focus and therefore approach: the process of forming proofs is quite different from the process of programming. It usually looks and feels a lot more like natural deduction like you probably have done in school. In Coq, there is a tactic called "refine," though, that feels more like programming: you basically provide the skeleton of the proof, which is, remember, the same thing as a function, and you fill in some blanks based on feedback from Coq. By contrast, a dependently-typed programming language is a programming language, and you have to squint just as hard to see how it relates to proving things. See Djinn for an example of this.
  3. There are several explicit moves afoot to make the distinction even blurrier. See Concoqtion and the YNot Project for examples.


natural deduction like you probably have done in school

drat, i think i dropped that class! ;-)

CHdBLL Correspondence

The Curry-Howard-de Bruijn-Lambek-Lawvere correspondence... I'm surely forgetting someone.

weaker, but worth a mention

If you read "test" in your post as "boolean predicate" and take "somehow enforce that things pass that test" to be "check at runtime that the predicates are true else throw exceptions", then Eiffel and its Design by Contract are worth a look.

Proof at compile time is better than testing at run time, but testing at run time is better than nothing. And it's definitely part of the base language. The integration is quite nice.
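For flavor, here is a rough Haskell sketch of the Eiffel-style runtime check (the helper names requireThat and safeSqrt are my own invention, and Eiffel's actual require/ensure clauses are far better integrated than this):

```haskell
-- A crude Design-by-Contract analogue: evaluate a precondition at
-- runtime and fail loudly if it does not hold.
requireThat :: String -> Bool -> a -> a
requireThat label ok x
  | ok        = x
  | otherwise = error ("precondition failed: " ++ label)

-- A wrapper with a runtime precondition, as a contract would state it.
safeSqrt :: Double -> Double
safeSqrt x = requireThat "argument is non-negative" (x >= 0) (sqrt x)

main :: IO ()
main = print (safeSqrt 9.0)
```

Unlike Eiffel, nothing here ties the contract to a class hierarchy or to documentation; it only shows the runtime-check half of the idea.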

Constraints in Frink

Similar to ned's comments above, my language Frink has runtime-checked constraints on variables. These constraints can even take the form of a user-defined function. For example, if I only want to be able to assign prime numbers to a variable, I can do that by defining a simple, one-argument function that returns true if the condition is met, false otherwise:

prime[x] := isInteger[x] and isPrime[x]

Then, that constraint can be applied to any variable when declaring it:

var x is prime = 2

In this case, the initial value is necessary to ensure that x contains a prime value at all times.

After that, you can assign values to x, but an error is produced if the new value is not a prime number:

x = 4
Error in evaluation: BasicContext: Cannot set symbol x, ContextFrame threw exception:
  Constraint not met: function prime[4] returned false.

There are more constraint types that can be set. For more information, see the Constraints on Variables section of the documentation. For example, you can constrain a variable so that it is only allowed to contain values which have units of measure of power, or energy, or time, or something similar. (Frink tracks units of measure through all calculations.)

Re: run- vs. compile time

It would be neat to be able to write rules and have a system which then automagically did the right ones at compile time and the right ones at runtime. Perhaps also with some control by the human so that you could say "I know this code is going to be used with 3rd party libraries at external sites, so I can't do whole program analysis, so please include the compile time checks with the runtime as well for extra paranoid safety" vs. "I'm using this locally only and speed is of the essence, so don't repeat the compile time checks at runtime" or "Actually, I hate waiting for the compiler because it breaks my concentration and flow, so make it all into runtime checks please".

Many thanks to all

Much food for thought, yum!

One possibly lame thought I was having was: how to get a really simple quick and dirty solution to what I'm thinking. Like, what would be the smallest hack we could do to turn Java or Scala into something with such a testing requirement?

a) The "super" type carries annotations about what rules must be met.

b) The sub-types write generators.

c) The compiler uses Quick Check style stuff to run the tests and generate a compile time error if they fail. (And perhaps there is some reflection involved to allow for things like testing Java's Object.equals() which must support super/subclass equality testing?)

d) There are lots of ways to "dial in" how strict this is: is it run one time to generate a license? Is it run on every build?

The main thing is to absolutely require that implementations of type classes have been tested, enforced by the language at compile time, rather than runtime. (This could obviously doom the language to never be used. ;-)

use a plugin

I haven't used it yet, but scalac does have the ability to bind in a plugin, and you can choose where in the sequence of compilation phases the plugin operates. You can do just about anything you'd like in a plugin.

Quick example in Coq

I have a quick example in Coq on my blog. This should give you an example of how this can be done in a dependently typed language. I'm afraid it assumes you know what a monad is. It might be nice for someone to do an example for more familiar objects.

Same with typeclasses instead of modules


It just uses dependent records there, as available in any proper dependently typed programming language. Note that Coq's module system is not implemented on top of records but is a whole separate part of the system. One thing it gains by that is phase distinction; the obvious drawback is that modules are not first-class.

In the example, the Class keyword declares a class, and the Program Instance keyword declares an instance. If some fields are missing, they are turned into obligations, i.e. one has to interactively prove the corresponding goals.
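For readers more familiar with Lean than Coq, here is a sketch of the same idea in Lean 4 (a close cousin on this point): a class field can itself be a proposition, so declaring an instance obliges you to supply a proof. The class name LawfulOp is my own:

```lean
-- A class whose third field is a proof obligation: instances must
-- supply evidence that the operation is commutative.
class LawfulOp (α : Type) where
  op      : α → α → α
  op_comm : ∀ x y : α, op x y = op y x

-- The instance discharges the obligation with an existing library proof.
instance : LawfulOp Nat where
  op      := Nat.add
  op_comm := Nat.add_comm
```

An instance with a non-commutative op simply would not typecheck, which is exactly the "enforced at compile time" behavior the original post asks for.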


I haven't really used classes in Coq yet, so it is interesting to see this example. I have yet *another* example using dependent records in Coq. Are classes just syntactic sugar for this approach?

Is it good for there to be so many ways to do the same thing? Do we need modules at all in Coq? Maybe dependent types make them redundant. Are there advantages to using modules?

Classes and modules

Typeclasses are just syntactic sugar for dependent records, and they'll be available in the next release. Modules provide phase distinctions not currently available through dependent types, although a program analysis could alleviate this problem. Also, a powerful module system, based for example on refinement, would not be as easy to encode using dependent types, I guess.