Simply Easy! (An Implementation of a Dependently Typed Lambda Calculus)

Andres Löh, Conor McBride and Wouter Swierstra

We present an implementation in Haskell of a dependently-typed lambda calculus that can be used as the core of a programming language. We show that a dependently-typed lambda calculus is no more difficult to implement than other typed lambda calculi. In fact, our implementation is almost as easy as an implementation of the simply typed lambda calculus, which we emphasize by discussing the modifications necessary to go from one to the other. We explain how to add data types and write simple programs in the core language, and discuss the steps necessary to build a full-fledged programming language on top of our simple core.

Time to become a

Time to become a contributing editor, don't you think?

Haskell's hodgepodge

I love papers that come with source code for interpreters, especially when it helps in exploring the theory.

The beginning of the introduction provides an interesting perspective on this paper:

Most Haskell programmers are hesitant to program with dependent types. It is said that type checking becomes undecidable; the phase distinction between type checking and evaluation is lost; the type checker will always loop; and that dependent types are really, really hard.

The same Haskell programmers, however, are perfectly happy to program with a ghastly hodgepodge of generalized algebraic data types, multi-parameter type classes with functional dependencies, impredicative higher-ranked types, and even data kinds. They will go to great lengths to avoid dependent types.

This paper aims to dispel many misconceptions Haskell programmers may have about dependent types. We will present and explain a dependently-typed lambda calculus λΠ that can serve as the core of a dependently-typed programming language, much like Haskell can be based on the polymorphic lambda calculus Fω.

Dependent types

The authors are certainly correct that dependent types provide a cleaner solution than the hodge-podge of features found in more widely-used languages.

I'm afraid they are also correct in the suggestion at the end of the paper that regaining the expressiveness needed for large-scale programming may require the introduction of general recursion, and the possibility that the typechecker will loop in some circumstances.

Ultimately, we need to embrace partiality in the general case of typechecking. Large-scale programming will never be done in a total language, because the burden of proving your program total imposes a development cost multiplier of several orders of magnitude. And, when you mix types and values to gain expressiveness, any partiality in the value language necessarily percolates up to the type system.

So, we need to grow comfortable with partial type systems. I feel the lack of significant progress in type theory in the past decade is largely a result of refusing to embrace general type recursion, which causes researchers to focus in on microscopic details of proving totality for slight improvements on existing type systems, rather than exploring the wider possibilities.

Can of non-terminating worms

So, we need to grow comfortable with partial type systems. I feel the lack of significant progress in type theory in the past decade is largely a result of refusing to embrace general type recursion, which causes researchers to focus in on microscopic details of proving totality for slight improvements on existing type systems, rather than exploring the wider possibilities.

I understand where you're coming from, Tim, but do you really want to introduce another source of non-termination into the development process? Can you imagine your typical programming team mucking about with a type-checker debugger, trying to figure out why their app won't compile?

Even those of us with a lot of type theory under our belts would probably find that a wee bit painful.

It would be nice if type-checking could be counted on to solve a whole range of problems without invoking the halting problem, but I think realistically the best we can do is to chip away at the edges that can be shown to terminate.

Maybe while we're chipping away, someone will make a big breakthrough, and we'll get our wish, but the low-hanging fruit seems to have been already picked.

Are there specific features that, having thought it through, you think would be "safe" to use in normal development circumstances without undue risk?

Non-terminating compilers are not new...

There are lots of languages and compilers which do not terminate on all programs (or take so long they might as well not terminate):

Common Lisp and Scheme compilers can fail to terminate if the user writes a non-terminating macro.

GHC (the Glasgow Haskell Compiler) implements type-system extensions which, if enabled, can make the compiler fail to terminate on some programs.

C++ compilers can be forced to take an arbitrarily long amount of time doing template expansion.

Hindley-Milner type inference can take exponential time in the length of the program text, so ML and Haskell compilers can be made arbitrarily slow.

Somehow people muddle along with all of these systems. One key is that in all of these cases, it's difficult for a less-skilled programmer to write "bad" programs (that make the compiler take effectively forever) by accident; they use features (write your own macros, create your own templates, use fancy type-system extensions) that are usually avoided for simple programs, and are used much more often in libraries than in callers of libraries. (Hindley-Milner typechecking seems not to be a problem in practice in any case.)

I hope (and expect) that there are interesting uses of Turing-complete type systems that would have the same properties: most people would never run into the non-terminating cases, and the people who do would know what they are doing.

Also, note that if you have a powerful type-level language (like Coq), carefully restricting it to be Turing-incomplete and terminating doesn't help much: the compiler might terminate in theory, but if you can make the compiler compute factorial(factorial(1000)) in unary, it's not going to terminate in practice.

We lived in a lake...

There are lots of languages and compilers which do not terminate on all programs (or take so long they might as well not terminate)

Many of your examples have one thing in common: though there are people who muddle along with them, they are normally considered "harder" to develop with "in the field", or set aside for "advanced use".

In particular, macros, templates, and GHC type extensions are optional extensions of the language: you don't have to use them, and you do so "at your own risk".

A dependent type system, in order to be meaningful and useful, is going to be mandatory: you won't be able to avoid it if you want to write in the language in question.

And as Tim himself notes, as you add refinements to your types, the dependent types you create are going to get more complex and harder to reason about, even if you don't mean them to.

I really do think we are talking about a different order of risk on top of the one that things such as macros and templates already bring to the table.

I do not think this is true

I don't think it's necessarily the case that adding refinements will add that much extra complexity. If you do it GADT style, it will be unpleasant, but you don't have to do that. You can potentially write things like:

when : Bool -> Type
when b = if b then Unit else Void

foo : n:Nat -> (when (prime n)) -> (when (n > 50)) -> result
...

and then treat each constraint independently.
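
For comparison, here is a rough sketch of the same idea in present-day GHC Haskell. This is only my illustration (the When family and the use of Data.Void are not from the post or the paper), and conditions like "n is prime" would still need type-level evidence, which is exactly where full dependent types help:

{-# LANGUAGE DataKinds, TypeFamilies, KindSignatures #-}

import Data.Void (Void)

-- Illustrative sketch only: When b is trivially inhabited when the
-- condition holds and uninhabited when it does not.
type family When (b :: Bool) where
  When 'True  = ()
  When 'False = Void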

dependent type system mandatory?

A dependent type system, in order to be meaningful and useful, is going to be mandatory: you won't be able to avoid it if you want to write in the language in question.

I'm not sure what you mean here. In a sense, it's trivially true; if you write a program in a language with a dependent type system, you have to use that type system, just like if you write a program in ML you have to use Hindley-Milner.

But I don't agree that it has to pervade your programming experience. In fact, I would think the opposite would be true: for a language with a dependent type system to be useful, you have to be able to spend a lot of your programming time not thinking about types at all. (I think the same is true of languages without dependent type systems, as well. After all, unless you're writing a very strange program -- like the Coq proof of the four-color theorem -- the meat of any program is at the value level.)

And in fact I find that when I write Haskell code, I don't spend a lot of time thinking about types; the compiler handles that for me. I don't see why a dependent type system would have to be more intrusive than the Haskell type system for normal programming.

The types depend on values

the meat of any program is at the value level

This is the problem: with dependent types you are mixing your values into the type system; you can't reason about the types as a system wholly independently from the possible values that can be embedded in it, or for that matter the values wholly independently from the types.

This is partly what makes dependent types so powerful, but I think also problematic from both a practical and theoretical point of view.

And why I don't think Tim is going to get his wish (even though I wish it along with him).

I think the usefulness of

I think the usefulness of dependent type systems far outweighs their potential problems. Dependent type systems allow for far better optimization of code and far better error catching at compile time than non-dependent type systems. Sometimes the compiler may not terminate while processing a dependently typed program, but that is not really very important: it is the programmer who has introduced the recursion, so the programmer is responsible for it. As others have pointed out, the C++ template meta-programming system 'suffers' from this, but it opens the door for many interesting capabilities.

Reasoning about types...

you can't reason about the types as a system wholly independently from the possible values that can be embedded in it

But most of the time, I don't reason about the types at all...that's the compiler's job. I only reason about types and the type system in two cases: 1) I'm trying to come up with a new use of the type system which is sufficiently different than anything I've done before, or 2) the compiler spits out a type error where it is not immediately obvious what the problem is. Both of these are somewhat rare events; certainly they add up to a tiny fraction of my total Haskell programming time. I don't see why this would be different in a language with dependent types.

It seems easy...

I don't see why this would be different in a language with dependent types.

I'm afraid this seems like speculation by analogy with strongly normalizing HM type checking. The whole idea Tim was proposing was to do away with the strong-normalization guarantee.

If you take away this guarantee (and make use of the power it offers) you will certainly have to reason about typing termination (and hence the type system) the same way you have to reason about program termination.

You may want to read Conor McBride's papers on the issue if you don't find my arguments convincing.

In defense of Hindley-Milner...

Exponential behavior only arises from deeply nested polymorphic let expressions, i.e.

let x = let y = 3
            z = let foo n = n + 1
                in foo y
        in y + z
in x * x

Expert or not, nobody writes code like that. Thus Hindley-Milner is efficient in practice, as you mentioned. (I seem to remember the term "quasi-linear" from somewhere...)

Exponential behaviour

In fact, HM type inference is only exponential when the types get exponentially large. So this occurs only for programs you couldn't practically write without type inference in the first place.

(That is, your example does not exhibit exponential behaviour yet.)
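
For what it's worth, the standard illustration (not from the thread) is a chain of lets that pair a value with itself: the inferred type doubles in size at every binding, so the principal type below already contains 2^4 copies of the type of x0, and extending the chain blows it up exponentially.

-- Standard illustration (not from the thread): the inferred type of blowup
-- contains 2^4 copies of the type of x0; each extra binding doubles it.
blowup x0 =
  let x1 = (x0, x0)
      x2 = (x1, x1)
      x3 = (x2, x2)
      x4 = (x3, x3)
  in  x4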

C++

C++ already has this problem, so I'd guess quite a lot of programmers are comfortable with the idea (or oblivious to the problems).

Nontermination and Soundness

You probably already know this, but type-erasure for a dependently-typed language is unsound in a partial language. The best you can do is to actually evaluate all the types and witness terms in a program, to make sure a "proof" isn't somebody lying with nontermination. It's like dynamic typing, except that the runtime "tag" checking fails only by looping! That's the situation with passing GADTs as evidence in Haskell - evaluating them is required for type-safety. Conor McBride explains. The Epigram paper Inductive families need not store their indices shows how much you can erase in a strongly-normalizing language.
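
A minimal Haskell sketch of that point about GADT evidence (the names Equal, bogus and castWith are mine, not Conor's):

{-# LANGUAGE GADTs #-}

-- Illustrative sketch: an equality witness that must be evaluated before use.
data Equal a b where
  Refl :: Equal a a

bogus :: Equal Int Bool
bogus = bogus            -- a "proof" built out of non-termination

castWith :: Equal a b -> a -> b
castWith Refl x = x      -- pattern matching on Refl forces the proof

-- castWith bogus 3 diverges instead of producing an ill-typed Bool; if the
-- witness were erased (never evaluated), the cast would go through unsoundly.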

I was talking about partial and total languages above, but what really matters is whether an individual term is strongly normalizing. You can use general recursion in your calculations and still only use total expressions in types if your type system separates them. Thorsten Altenkirch and others are working on a monad for partiality, with that in mind (slides).

This fits with the pay-as-you-go typing Epigram is aiming for - you can define simple algebraic data types in a dependently typed language and it should be no more trouble than using ML, or you can define better-specified types, justify functions a bit more, and get stronger guarantees. Similarly, you should be able to code with general recursion if you want to, and suffer the costs if you use that code in types, or maybe work a little more to show something total, for easier reasoning about it. I'd like to think the bits past 11 on Paul Snively's dynamic-typing/static-typing slider are falling into place.

Type erasure requires totality

You've made a very important point here: type erasure requires totality. Any dependent-typed expression that may diverge must be fully evaluated at runtime, otherwise the type system is unsound.

If the language is ambivalent about partiality, this means dependent types carry a significant runtime cost.

The answer is that dependent-typed, performance-critical languages need to have some sort of effects-typing system so that the runtime can erase types whose computation is conservatively known to be total. In the general case, it won't be able to ascertain that, but in the typical cases of simple dependent types (e.g. the Haskell-like subset), it's pretty straightforward.

Proof-Carrying Code

Tim, how does this square with a desire to implement proof-carrying code in order to provide security guarantees with respect to code, either dynamically loaded or received over the network, from untrusted sources?

Proof-Carrying Code

For security, it just means that non-total type expressions can't be erased, but must be evaluated at runtime to preserve the semantics. It's a performance hit, probably not a huge one.

Can you imagine your typical

Can you imagine your typical programming team mucking about with a type-checker debugger, trying to figure out why their app won't compile?

Hey, C++ templates are a Turing complete compile-time construct, hence they can loop. Industry-wide, maybe 0.01% of C++ debugging time is spent determining why a template expression loops. Much as we all hate C++, it's the world's leading mainstream programming language today, and the partial typechecker is absolutely not a barrier to adoption.

The same can of worms?

Hey, C++ templates are a Turing complete compile-time construct, hence they can loop.

They may have the power to loop, but, though I haven't used them myself in a very long time, my recollection is that one normally doesn't call upon that power very often (perhaps ever in a well-designed template library). Has this changed?

If it hasn't, then the template language has more power than it really needs to fulfill most or all of its purpose, and could probably be replaced with a guaranteed terminating system without much loss of effectiveness.

What you seem to be proposing sounds quite different to me: like you want to routinely call upon that power, bringing along its full complexity by necessity. Is this really a comparable situation to C++ templates?

modern C++ style and compile time nontermination

They may have the power to loop, but, though I haven't used them myself in a very long time, my recollection is that one normally doesn't call upon that power very often (perhaps ever in a well-designed template library).

In chapter 3 of Modern C++ Design, Alexandrescu introduces template typelists, essentially laying the groundwork for a Lisp in C++ templates at compile time.

"Typelists are a C++ tool for manipulating collections of types. They offer for types all the fundamental operations that lists of values support."

Loops allow tree definitions

They may have the power to loop, but, though I haven't used them myself in a very long time, my recollection is that one normally doesn't call upon that power very often (perhaps ever in a well-designed template library). Has this changed?

Self-referential templates (i.e. recursive templates which are the same thing as loops) are useful for the definition of tree structures at compile-time. One application is the definition of parsing combinator libraries (see my recent post on LtU about the YARD parsing framework).

I'm not sure you could write something like YARD with a weaker (e.g. guaranteed terminating) template system, but I could be wrong.

The power to loop

They may have the power to loop, but, though I haven't used them myself in a very long time, my recollection is that one normally doesn't call upon that power very often

An example library that uses looping constructs at compile time is boost::python. I use it every day, as do many others. It would be tricky to modify C++ to remove non-termination but still keep this power.

Strong Normalization is bliss

Every time I see this discussion about non-termination pops up, I wonder: what kind of functions do you have in mind that don't have a structural or well-founded recursion argument?

General recursion

For example, consider a dependent-typed program whose compilation constructs a parser from a parser definition in a string (say, in BNF format). Then, when you run that program, it parses another file using that parser. Here, we're moving something that is traditionally done at runtime to compile-time, which is sensible here since constructing a parser is a purely functional operation.

Well, depending on what the string contains, compiling this could loop. So, to write the dependent-typed parser generator using well-founded constructs, you can't simply write a function from strings to parsers; you need to construct a provably well-founded parser by hand, using constructs like primitive recursion.

More importantly, it's far easier for real-world programmers to write any sort of code using general recursion than to restructure everything using e.g. primitive recursion or this paper's FoldNat/NatElim constructs -- perhaps an order-of-magnitude easier. Such productivity multipliers are a big deal for large-scale programming.

In those cases, programmers would much prefer to write code easily using general recursion and to debug the very rare cases where compilation doesn't terminate, than to spend a much greater effort writing these algorithms using unfamiliar induction constructs.
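
To make the stylistic gap concrete, here is a small Haskell sketch contrasting the two styles. The natElim below is only a stand-in for the paper's eliminator, not its actual interface, and the function is deliberately simple; for genuinely non-structural recursion the gap is much wider.

-- General recursion: written the obvious way (assumes a non-negative argument).
sumTo :: Integer -> Integer
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)

-- The same function via an explicit eliminator in the spirit of the paper's
-- natElim (an illustrative stand-in, not the paper's API).
natElim :: a -> (Integer -> a -> a) -> Integer -> a
natElim z _ 0 = z
natElim z s n = s (n - 1) (natElim z s (n - 1))

sumTo' :: Integer -> Integer
sumTo' = natElim 0 (\k rest -> (k + 1) + rest)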

Practical matters

I think you should be able to write a function from strings to (terminating) parsers and have the choice of building a proof that this function terminates to be able to compute with it. You could even have the choice to run the function assuming a hypothetical order makes it well-founded. Whether it has to be in an untrusted part of your system (e.g., after extraction in Coq) or not (as in a Turing-complete programming language) is a design choice. I personally am more inclined to the former as I want to certify programs, but that's my taste :) I agree that proving that the parsers you build will terminate may be a very complex task, but that's exactly why you have dependent types: to prove complex properties.

The question of what can be allowed automatically is an active research area, and there are promising solutions to the problem; notably, sized types could help a lot. I'm still not convinced much everyday code uses complex termination arguments. I think you can give decreasing measures most of the time. Now programming with eliminators is certainly not the way to go, but pattern-matching is more or less equivalent to it and used pervasively in ML to structure recursive programs, so the solution's already there!

complex termination arguments

I'm still not convinced much everyday code uses complex termination arguments.

Code that would require complex termination arguments is probably quite rare, but it definitely exists.

I recently wrote a routine (for finding the real roots of a polynomial) where the termination argument uses algebra, calculus, and number theory. (I couldn't even produce the whole termination argument without references, although I can point at the papers you would need to read.) Also, it uses libraries written by many different people at different times in different programming languages; subtle bugs in those libraries could make the routine loop.

And even then, all that I can prove is that if I had a source of random numbers, the routine would terminate with probability 1. Since I use pseudo-random numbers, there are probably inputs that the routine does not terminate on (although I'm not at all worried about it, since those inputs would be far too large to be practical anyway).

The example you give with

The example you give with parsers is actually one I'd be quite happy to take a shot at - with a reasonable understanding of the parsing process I'm fairly sure it all drops out. It would require parsing into a grammar before compiling the grammar into a parser, but I can't help thinking that this is a good thing anyway!

That aside, building things in terms of more specific higher-order functions is often easier than either general or structural recursion. I can't help thinking that that's the appropriate trade-off.

Parsing

You can't do this directly with primitive eliminators, but there's a really straightforward and beautiful trick you can use to handle parser generation and many similar algorithms (such as dataflow analysis), where you need to iterate to a fixed point. The recipe is:

1) Generate the step function by recursing over the grammar.
2) Show this function is well-founded.
3) This proves iteration to a fixed point terminates.

Here, I give an example from parsing -- figuring out whether a grammar matches the null string or not. For simplicity, I only allow a single nonterminal, so the grammar type is equivalent to the form S ::= ... | ... | ... with S as the only nonterminal. This isn't intrinsic, but it keeps the example simple.

type grammar =
  | Char of char
  | Seq of grammar * grammar
  | Empty
  | False
  | Or of grammar * grammar 
  | Self

let isnull_c g b = 
  match g with
  | Empty -> true
  | Char _ -> false
  | Seq(g1, g2) -> (isnull_c g1 b) && (isnull_c g2 b)
  | False -> false
  | Or(g1, g2) -> (isnull_c g1 b) || (isnull_c g2 b) 
  | Self -> b

(* Here, we use a non-structural recursive call, but it's justified by
the fact that the iteration is well-founded -- you'll eventually reach a
fixed point, because f can only increase v and v ranges over the finite
set of booleans. In ML you don't track this fact, but you could in a
dependently-typed language. *)

let rec fix f init = 
  let v = f init in
  if v = init then v else fix f v

let isnull g = fix (isnull_c g) false

I write lots of functions

I write lots of functions that are not structurally recursive. Of course, these functions *do* have induction metrics that go down, but I find tracking that explicitly somewhat annoying.

For example, here's a functional quicksort:

let rec split' x list bigger smaller =
  match list with
  | []                   -> (bigger, smaller)
  | y :: rest when x < y -> split' x rest (y :: bigger) smaller
  | y :: rest            -> split' x rest bigger (y :: smaller)

let split x list = split' x list [] []

let rec qsort list = 
  match list with 
  | []      -> []
  | [x]     -> [x]
  | x :: xs -> let (bigger, smaller) = split x xs in
               (qsort smaller) @ [x] @ (qsort bigger)

split has a structural recursive definition, but qsort does not; it depends on the fact that the calls to split return two lists, each of shorter length than the argument to qsort.

EDIT: As an engineering matter, though, I think that it's smart to stick to normalizing languages. The basic reason is that there's potentially a LOT of computation that you have to do at type-checking time, and knowing your language is normalizing means that when all you care about is inhabitation (e.g., for equality proofs), you don't have to normalize a proof term at all. That's potentially really significant for efficiency.

Sized types to the rescue

Indeed, having to encode in the type or prove separately that split "preserves" the size of the input is cumbersome, but we know how to do it automatically! Sized types (see e.g. Andreas Abel's PhD) would allow you to specify that
split : a -> list^i a -> list^i a -> list^i a -> (list^i a, list^i a)
effectively expressing with types that the recursive calls to qsort will be fine, as the sizes of bigger and smaller will be less than or equal to the size of xs. I know of no implementation using this technology yet, but it's clearly the way to go. It even works with higher-order functions, think:
map : (a -> b) -> list^i a -> list^i b

Dependent Types: Easy as PIE

Given the discussion of the usefulness and dangers of dependently typed programming, here's a draft paper by Stephanie Weirich, Dependent Types: Easy as PIE

Dependent type systems allow for a rich set of program properties to be expressed and mechanically verified via type checking. However, despite their significant expressive power, dependent types have not yet advanced into mainstream programming languages. We believe the reason behind this omission is the large design space for dependently typed functional programming languages, and the consequent lack of experience in dependently-typed programming and language implementations. In this newly-started project, we lay out the design considerations for a general-purpose, effectful, functional, dependently-typed language, called PIE. The goal of this project is to promote dependently-typed programming to a mainstream practice.

compile time loops

I wonder: suppose you have a dependently typed program that loops when you try to compile it. Now you rewrite it in a non-dependently typed programming language. Would that not necessarily mean that that program would loop at run-time?

If that is the case, then I say the choice is easy. The run-time loop could remain undetected until a customer runs into it.

Working link

From Conor McBride's homepage.

Broken link

The link given in the first post seems to be broken. Here is a new link:
http://people.cs.uu.nl/andres/LambdaPi/index.html