Why Dependent Types Matter

Why Dependent Types Matter, by Thorsten Altenkirch, Conor McBride and James McKinna

We exhibit the rationale behind the design of Epigram, a dependently typed programming language and interactive program development system using refinements of a well known program, merge sort, as a running example. We discuss the relationship to other proposals to introduce aspects of dependent types into functional programming languages and sketch some topics for further work in this area.

Epigram, dependent types, general recursion, indexed datatypes - it's all here!

Enjoy.


Covered before

This paper was already covered here. But I suppose that if it's a good paper then it doesn't hurt to remind people about it again :). I for one haven't read it yet.

Home page

It deserved to be on the home page...

Epigram

The authors want to promote dependent typing and Epigram, I suppose, but these are conflicting goals. I think explaining the ideas using a more conventional language (with the appropriate type system extensions etc.) would make the paper more accessible.

Am I the only one who thinks this is an issue?

How to talk dep. types

Are you suggesting they use a language that they aren't developing, or that they invent language extensions for the purpose of writing this (and other) papers?

Language of Choice

Are you suggesting they use a language that they aren't developing

Imagine the horror... ;-)

Seriously, what I had in mind is that for expository purposes you can explain your ideas using more conventional notation, even if the extensions you use aren't implemented, and exist only on paper.

This isn't to say that they should abandon Epigram. I was merely talking about the best way to evangelize dependent type systems.

But this discussion does bring us to a more general question, i.e. does it really make sense for language researchers to invent new languages for each feature set they want to explore?

Notice that not everyone follows this approach. This is just one style of doing PL research.

features

Ehud was asking whether it makes sense to design a new language for every feature we want to explore. Obviously not. On the other hand I see people adding feature after feature to Haskell's type system, and can't help noticing that most of the new and many of the old features would be subsumed by having access to full dependent types. Hence, I would say that while a new language for every feature is certainly not a good idea, at some point it pays off to be more radical and move to a new language instead of trying to fix the old one.

Sure

I agree, of course.

My original point was that I think many more would come to appreciate the ideas you are advocating in this paper if at least some of the examples had been given using more familiar syntax. That's more of an educational/propaganda observation than a research one, of course.

Need both

You need a language where the expression of the ideas is natural and easy. But you also need to show how these things can be expressed in the more common languages (warts and all).

clear and direct in Epigram

I disagree, in this case. This paper gave me a bunch of "aha" moments about dependent typing, and inspired me to start playing around with Epigram. I doubt I would have been so excited if it had been loaded down with encodings into GADTs and caveats about current FP implementations.

Finally we see an example of using dependent types to enforce something more complicated than a datatype shape constraint. The examples I'd seen up until now were so small that I didn't see how dependent types (or GADTs) could help me prove anything interesting about my programs.
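(For context, the sort of small "shape constraint" example I mean looks like this in GHC Haskell with GADTs; a minimal sketch of my own, not taken from the paper:)

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals for indexing.
data Nat = Z | S Nat

-- Length-indexed vectors: the archetypal "datatype shape constraint".
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- A total head function: the empty vector simply has the wrong type.
vhead :: Vec ('S n) a -> a
vhead (Cons x _) = x

-- Zipping demands equal lengths, checked at compile time, so the
-- mismatched cases are statically impossible and need no equations.
vzip :: Vec n a -> Vec n b -> Vec n (a, b)
vzip Nil         Nil         = Nil
vzip (Cons x xs) (Cons y ys) = Cons (x, y) (vzip xs ys)
```

Useful, but the paper goes well beyond this kind of example.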

I've been reading both the Omega papers and the Epigram papers as they appear on LtU; it's wonderful to have these exemplars of dependent typing developing in parallel. Things are moving again in the area of programs+proofs, and it's a heady time!

I look forward to Omega-style FP being mainstream within a few years; I'm sure GADTs will help us to write incrementally more-typed programs.

On the other hand, Epigram knocks my socks off, even though it's eminently less practical. It's the first formalism I've seen in which both programs and proofs felt natural. A bit more friendliness in the editor, and I could see myself getting addicted...

revolution or evolution

No Ehud, you are not the only one... Haskell is evolving towards dependent types (e.g. GADTs) and that's a good thing. Epigram is a more radical departure, I believe we discuss the tension in the conclusions of the paper.

Where to get background info?

I'm a professional programmer and I'd like to understand this paper, but I don't have a degree in CS, so I'm a bit lost when it comes to formalisms. For example, I have no idea what this sentence means: "As a proof method, general recursion is wholly bogus--its type, ∀P . (P -> P) -> P, is a blatant lie."

Where can I go pick up the background knowledge that this paper assumes?

Thanks.

Relevant thread

Check out this thread for references.

There are also attempts to start something like LtU'pedia. (/me eyes shapr.)

forall P . (P -> P) -> P

The link in Koray Can's reply is good. In the meantime, the statement in WDTM is based on the Curry-Howard isomorphism between logic and the lambda calculus, the basis of functional programming. Types correspond to propositions, and the logical proposition in the subject line is certainly false.

This proposition corresponds to the type given to the y combinator:

y f = f (y f)

with which we can write general recursion by abstracting out the recursive call:

fact' c 0 = 1
fact' c x = x * (c (x-1))

fact = y fact'

It is bogus (a false proposition) because the term inhabiting it doesn't terminate, and a non-terminating "proof" proves nothing.

Google around for types and recursion and you'll find better explanations than mine.
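(If you want to try it, the sketch above is runnable Haskell once you add type signatures; here it is spelled out. Note that y is definable only because Haskell permits general recursion at every type:)

```haskell
-- The y combinator: under Curry-Howard its type reads as the (false)
-- proposition forall p. (p -> p) -> p. Haskell accepts it anyway
-- because the language allows non-termination everywhere.
y :: (a -> a) -> a
y f = f (y f)

-- Open-recursive factorial: 'c' stands in for the recursive call.
fact' :: (Integer -> Integer) -> Integer -> Integer
fact' c 0 = 1
fact' c x = x * c (x - 1)

-- Tie the knot with y.
fact :: Integer -> Integer
fact = y fact'
```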

And the symbols?

Koray & Jim - I've looked through the Getting Started thread and seen several links/papers that I'll follow up on. One thing I didn't see though, is any kind of tutorial on the symbols and notations used in this paper. I'm simply not familiar with their meaning. I didn't know that upside down A means "for all" until I inferred it from Jim's reply. Is there a resource available that will help me decode the symbols and formulas?

Thanks.

Quantifiers

Mathworld and Wikipedia can sometimes be useful for this sort of thing, although I imagine finding something if you only know the symbol could be difficult.

In this particular case, ∀ ("for all") is the universal quantifier, from logic, and it's very widely used. Its brother is ∃ ("there exists"), the existential quantifier. Here's another page about quantifiers.
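(A programming aside: ∀ is also the `forall` that implicitly prefixes every polymorphic type in Haskell; a small sketch:)

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- 'forall a.' is Haskell's spelling of the universal quantifier:
-- this function must work uniformly at every type a.
identity :: forall a. a -> a
identity x = x
```

Read logically, `identity` is a proof that every proposition implies itself.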

I've tried to teach people autodidactism ...

I've tried to teach people autodidactism, but I've realized they have to learn it for themselves.


Symbol definition is a common problem for me too. I often jump into a new subject area and realize that no amount of tutorials will help until I know the symbols.

If there existed a symbol lookup tool or website that included contextual usage, I could teach myself new things in less time.


For example, I don't think that the concepts in the Bananas, Lenses, and Barbed Wire paper are difficult, the hardest part for me was discovering what the symbols meant.

Wikipedia is improving the lives of autodidacts (like me!), but this could still be easier. Maybe this would be a useful point for the proposed LtUpedia wiki.


--Shae Erisson - ScannedInAvian.com

googling for ∀

No kidding! I was recently trying to search for a description of propositional logic that uses a minimal set of primitives--I think you can do things like encode ⊥ as ∀ a . a--but it's really hard to google for ∀!

Entities

∀, ∃

Very cool, Anton. I had never come across the HTML entities for quantifiers before.

Are those new, or have I just never noticed them?

reference

They don't have everything I want (no \mapsto!), but they've got some of the standard LaTeX symbols.

http://www.cookwood.com/html/extras/entities.html

A simpler more direct explanation

P → P, for all P, is taken as an axiom (or the full statement is admissible) in most logics. Since we have P → P for all propositions, we can use the above (P → P) → P to prove P for all P, i.e. everything can be proven true. Via CH, (Y id) (the non-terminating computation) is the proof.
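(In Haskell terms, purely as an illustration of the same point: the general-recursive fixpoint operator has exactly the type (P → P) → P, so applying it to the identity "proof" of P → P yields a "proof" of any P at all, namely a diverging term. The very same operator used productively is perfectly useful:)

```haskell
-- General recursion as a "proof rule": (p -> p) -> p.
fix' :: (p -> p) -> p
fix' f = f (fix' f)

-- (Y id): a "proof" of every proposition, i.e. a term that diverges
-- if you ever try to evaluate it.
bogus :: p
bogus = fix' id

-- Used productively, the same rule terminates lazily: an infinite
-- list of ones, any finite prefix of which is computable.
ones :: [Int]
ones = fix' (1 :)
```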

background knowledge

Good question.

I don't know a good introduction to constructive mathematics for computer scientists. I hope that my lecture notes for my Math for Computer Science course will some day evolve into such a thing, but for the moment...

In any case, maybe the best way to learn the stuff is by playing around with it: Conor has written an excellent tutorial for Epigram and the system is available for free from our webpage (under reconstruction) and runs on Windows, Linux and MacOS. A new release is under development (any volunteers to help with the GUI?).

GUI for Epigram

A new release is under development (any volunteers to help with the GUI?).
I guess you mean the editor/IDE and not the GUI library for use from Epigram?

I spent a couple of days assessing efforts and ways to build an editor for Epigram in Eclipse. I have more experience with graphical editors, but Epigram calls more for rich text editing. Also, at that moment (about a year ago) my understanding of dependent types was much worse than today (which is still very basic :-) ).

A big problem is that the editor actually interacts heavily with the elaborator, for which I don't think there is a Java API (one could try to use it via stdio). It looks like there is a lot of overlap between the elaborator/compiler and the editor - if only in terms of data structures.

My final suspicion is that, to make any sense from an engineering point of view, the official IDE should share its implementation language with the other subsystems, so either it should be Haskell based or the existing Epigram rewritten in Java (oh, never! :-) ). Uh, there is always the option of self-hosting, but for that Epigram must probably be embedded as a rule language into some rewriting system (only half-joking here).

Epigram editor interface

One of the alternatives we are considering is to have a language independent interface between the elaborator and the IDE. This doesn't look too difficult. In this case we could plug in different IDE variants.
You didn't say what the results of your assessment wrt. Eclipse were.
It is unlikely that we would want to rewrite Epigram in Java. I'd rather hope that we implement it in Epigram one day...

Epigram, dialog, play

One of the alternatives we consider is to have a language independent interface between elaborator and IDE.

Does it mean FFI via C, or more like a stream protocol?

You didn't say what the results of your assessment wrt. Eclipse were.

I am not sure they are worth much, as YMMV greatly, as you know. I would say it would take from 2 weeks for base functionality to 2 months for a first release - but working at an optimal schedule and assuming the interface to the elaborator is ready. One of my biggest concerns is - why? There are two reasons I see for doing it in Eclipse - popularizing Epigram (as using Eclipse is pretty popular) and acquiring a good development team (as hacking Eclipse is pretty popular). OTOH, I am not sure the same cannot be said about Emacs (yes, it's not in vogue in certain circles) :-)

It is unlikely that we would want to rewrite Epigram in Java.

As I said - never! :-O

I'd rather hope that we implement it in Epigram one day...

Elaborator/compiler? Or the editor? For the latter you probably need to ensure not only Turing completeness but, more importantly, interactiveness; that's why I was kidding about rewriting systems.

Epigram, partiality and the real world

For the latter you probably need to ensure not only Turing completeness but, more importantly, interactiveness; that's why I was kidding about rewriting systems.

Oh, the magical Turing completeness again. I view non-termination as an effect which can be swept under a monadic carpet the same way Haskell sweeps all the dirt into the IO monad. Actually not just the same way but better, because we are going to have a compile-time semantics of partiality which allows us to reason about potentially non-terminating programs. Venanzio Capretta has written a paper about coinductive recursion which is the starting point, and Tarmo Uustalu talked about it last year at a Dagstuhl meeting (slides). Hope we find some time to write a paper "Partiality as a monad" soon; implementing it in Epigram will take a bit longer, as some other issues need addressing first. Interactiveness can be dealt with similarly. When Epigram grows up, we want to use it for real programs!
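(A sketch of the idea, assuming the monad in question is along the lines of Capretta's coinductive delay monad; in Haskell, laziness lets us write the coinductive type directly:)

```haskell
-- A value of type Delay a is either available now or after one more
-- step of computation; non-termination is infinitely many Laters.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

-- Sequencing accumulates the steps of both computations.
instance Monad Delay where
  Now a   >>= k = k a
  Later d >>= k = Later (d >>= k)

-- The wholly partial computation: it never delivers a value.
never :: Delay a
never = Later never

-- Run with a step budget; Nothing means "no answer yet".
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a)   = Just a
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d
```

A total language can then keep bounded execution like `runFor` inside the safe fragment while still giving a compile-time semantics to potentially non-terminating programs.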

Eclipse?

I should also address the more technical aspects of Andris' posting:

Does it mean FFI via C, or more like a stream protocol?

At the moment we are thinking about a stream protocol.

There are two reasons I see for doing it in Eclipse - popularizing Epigram (as using Eclipse is pretty popular) and acquiring a good development team (as hacking Eclipse is pretty popular).

We'd like to have a more professional IDE, hence if one could deliver this functionality using Eclipse, this may be interesting. I wonder how easy it would be to integrate the unusual interaction of the current Epigram implementation into something like Eclipse. In Epigram you develop and typecheck partial programs by interacting with the elaborator and refining a partial program. Conor seems to think that this wouldn't fit very well with the Eclipse framework, but you seem to be more optimistic.

Levels of reuse in this tower of abstractions

Conor seems to think that this wouldn't fit very well with the Eclipse framework, but you seem to be more optimistic.
I might see his point. An Epigram editor is very unlike a Java editor, so you cannot, for example, inherit from its implementation. Luckily, Eclipse does not force you to - it is just a bunch of plug-ins interacting with each other via interfaces defined by some other plug-ins, so the depth of integration is really how many interfaces you support from the most popular plug-ins (most importantly, those forming vanilla Eclipse).

You can use Eclipse just as a platform for distribution and launch (not very useful), you can build your editor from the ground up but make sure it implements the base interface of editors (doable, we did it before Eclipse went 1.0, as there was nothing better yet), you can extend the generic text editor, you can build a graphical editor on top of GEF, you can even generate it if you really like (I don't).

In Epigram you develop and typecheck partial programs by interacting with the elaborator and refining a partial program.
In (Eclipse) Java you develop and typecheck partial programs by interacting with the [code assist, quick fixes, refactorers, source generating wizards, cheat sheets, code templates] and refining a partial program.

Don't get me wrong, I do see how Epigram is revolutionary, I am just trying to say that the mainstream already has pieces of it (maaany small pieces).

One of my gravest concerns is that Epigram will not reuse all the mechanisms for these [code assist, quick fixes, refactorers, source generating wizards, cheat sheets, code templates], unifying their functionality under the elaborator :-)

So the current question is what you mean by integration. You certainly will not be able to reuse the functionality you do not need, so the savings in effort from going with Eclipse might not be as tremendous (and possibly not outweigh the cost of an inferior implementation PL). As I said, I might see Conor's point.

TeXlipse

TeXlipse is an example of an editor done in Eclipse.

This post has two purposes:

  1. It demonstrates the architecture of a simple Eclipse editor.
  2. It advertises this TeX editor (I was always planning to play with TeX; the thread about The Monad.Reader motivated me further).
And, of course, the cited document was typeset in TeXlipse :-)

GUIs in Haskell

Well, Haskell currently has good support for wxWidgets (via wxHaskell) and GTK. I guess the difficulty would mostly reside in implementing a good editor, as people used to a good one will expect some features.

The nice thing about wxHaskell is that it's easy to create multi-platform GUI programs, the downside is that compiled executables are huge (due to ghc linking all of wxWidgets into the resulting file, if I recall correctly).

wxHaskell

Thank you, yes, we are considering using wxHaskell. On the other hand we'd like to have an interface which doesn't restrict us to using Haskell to implement the IDE.

Constructive Type Theory, Laziness and Relevant Implication

Having seen an Epigram demo, this is definitely something to watch, even if I don't fully grasp how it all works.

I have a vague recollection that intuitionistic type theory has a connection to lazy evaluation, but can't find (or have foolishly missed) any reference to Epigram's evaluation strategy, i.e. strict vs. lazy. AFAIK the question may be moot.

Another vague recollection is the connection between relevant implication and strict evaluation. Has there been any work on the impact of relevant implication on type theories?

That and Collatz

Uhm, do you mean type validity w.r.t. normalization? I could use some fresh enlightenment too.

Some stuff is for free; some is not. I googled a bit and found this which I am reading now for the heck of it. Maybe your answer is in there.

So I didn't answer your question. But I'll add one: I was wondering if one can define the Collatz function in Epigram. Anyone tried that?

The paper you link to is the

The paper you link to is the best I've found on substructural logics in general and relevant logic in particular. I think it mentions that the theorems of R are the types of the λI calculus, and other interesting tidbits.

I imagine you could implement Collatz if it's total |:^)
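(A hedged Haskell sketch of one way a total language could accommodate Collatz: recurse structurally on an explicit fuel argument. The function below is total by construction; whether enough fuel always exists is exactly the open Collatz conjecture:)

```haskell
-- Number of Collatz steps from n down to 1, given a step budget.
-- The recursion is structural on the fuel, so a total language
-- like Epigram could accept this definition.
collatz :: Int -> Integer -> Maybe Int
collatz _    1 = Just 0        -- reached 1: zero further steps
collatz 0    _ = Nothing       -- out of fuel before reaching 1
collatz fuel n
  | even n    = succ <$> collatz (fuel - 1) (n `div` 2)
  | otherwise = succ <$> collatz (fuel - 1) (3 * n + 1)
```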

Thanks!

Thanks for the kind words about that paper. (It was fun to write, and I taught from that material at ESSLLI 2001).

If you pass around links, better to refer people to the page at consequently.org, as the old paper on phil.mq.edu.au (where I used to work) could disappear at any time.

The question of how to do interesting type theory with regard to relevance is not completely clear. Work of Wadler's on linear logic and types is obviously apposite here. The λI connection goes all the way back to Church (who is one of the inventors of the implicational fragment of the relevant logic R). The interesting issue is what to do with other type constructors or for other logical connectives. (The same issue, I suppose, thought of from the point of view of a type theorist on the one hand or a logician on the other.) One interesting recent take is in a paper by Neil Leslie and Ed Mares, entitled "CHR: A Constructive Relevant Natural-deduction Logic" in the semi-free Electronic Notes in Theoretical Computer Science 91: 158-170 (2004). (Link here.)

(I've got some of my own thoughts on this topic, going back to Prawitz's natural deduction formulation of a logic like R, and hopefully soon they'll make their way into the Proof and Counterexample manuscript.)